ICT Commitment of Traders by toodegrees
Description:
The Commitment of Traders (COT) is a valuable raw data report released weekly by the Commodity Futures Trading Commission (CFTC). This report offers insights into the current long and short positions of three key market entities:
Commercial Traders (usually represented in red)
Large Traders (typically depicted in green)
Small Speculator Traders (commonly shown in blue)
The concept of utilizing the COT data as a strategic trading tool was first introduced by Larry Williams, who emphasized the importance of monitoring Commercial Speculators – large corporate producers or consumers of commodities.
The Inner Circle Trader (ICT) prompts us to delve deeper into this data. While we can easily determine their Net Position (also referred to as the Main Program) by subtracting Commercial Short Positions from the Commercial Long Positions, this calculation doesn't reveal their ongoing Hedge Program.
Merely following the Main Program won't provide a trading edge. Aligning with the Hedge Program can be an invaluable weapon in your trading arsenal.
The Commercial Speculators' Hedge Program can be unveiled by examining the highest and lowest reading of their Net Position over a chosen time period and setting a new "zero line" between these extremes. This process generates a novel "COT Graph" providing a detailed understanding of the Commercial Speculators' current market activity.
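For illustration, the re-zeroing step can be sketched in Pine Script as follows. The netPosition placeholder and the lookback default are assumptions, standing in for the real COT net-position data the script pulls from TradingView's feed:

```pine
//@version=5
indicator("Hedge Program sketch")
// Placeholder: substitute the Commercial net position (longs - shorts)
// from a real COT data feed here.
netPosition = close
// Hedge Program window; ~6 months of daily bars (illustrative default)
lookback = input.int(126, "Hedge Program lookback")
hi = ta.highest(netPosition, lookback)
lo = ta.lowest(netPosition, lookback)
zeroLine = (hi + lo) / 2              // new "zero line" between the extremes
hedgeProgram = netPosition - zeroLine // the re-zeroed "COT Graph"
plot(hedgeProgram, "Hedge Program")
hline(0)
```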
When the Hedge Program, Seasonality, and Open Interest are cross-referenced with Institutional Orderflow, a trader can construct a very clear medium-to-long-term market narrative.
Features:
Access COT Data for the Commercial Speculators via TradingView's reliable data source
Automate calculations and display the 3-month, 6-month, 12-month, 2-year, and 3-year Hedge Program
Define your own Custom Time Range for the Hedge Program
Display the Main Program and all Hedge Programs in an easy-to-understand table format
Additionally, by following the included instructions, you can augment your table with COT data from multiple markets. This extra information can help monitor correlated markets and develop a more robust market narrative.
Open Interest Profile [Fixed Range] - By Leviathan
This script generates an aggregated Open Interest profile for any user-selected range and provides several other features and tools, such as OI Delta Profile, Positive Delta Levels, OI Heatmap, Range Levels, OIWAP, POC and much more.
The indicator will help you find levels of interest based on where other market participants are opening and closing their positions. This provides a deeper insight into market activity and serves as a foundation for various trading strategies (trapped traders, supply and demand, support and resistance, liquidity gaps, imbalances, liquidation levels, etc.). Additionally, this indicator can be used in conjunction with other tools such as Volume Profile.
Open Interest (OI) is a key metric in derivatives markets that refers to the total number of unsettled or open contracts. A contract is a mutual agreement between two parties to buy or sell an underlying asset at a predetermined price. Each contract consists of a long side and a short side, with one party consenting to buy (long) and the other agreeing to sell (short). The party holding the long position will profit from an increase in the asset's price, while the one holding the short position will profit from the price decline. Every long position opened requires a corresponding short position by another market participant, and vice versa. Although there might be an imbalance in the number of accounts or traders holding long and short contracts, the net value of positions held on each side remains balanced at a 1:1 ratio.
For instance, an Open Interest of 100 BTC implies that there are currently 100 BTC worth of longs and 100 BTC worth of shorts open in the market. There might be more traders on one side holding smaller positions, and fewer on the other side with larger positions, but the net value of positions on both sides is equivalent: 100 BTC in longs and 100 BTC in shorts (1:1).
Consider a scenario where a trader decides to open a long position for 1 BTC at a price of $30k. For this long order to be executed, a counterparty must take the opposite side of the contract by placing a short order for 1 BTC at the same price of $30k. When both long and short orders are matched and executed, the Open Interest increases by 1 BTC, indicating the introduction of this new contract to the market.
The meaning of fluctuations in Open Interest:
- OI Increase - signifies new positions entering the market (both longs and shorts).
- OI Decrease - indicates positions exiting the market (both longs and shorts).
- OI Flat - represents no change in open positions due to low activity or a large number of contract transfers (contracts changing hands instead of being closed).
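To make the delta reading concrete, here is a minimal Pine sketch that plots the per-bar change in OI. The "_OI" companion-symbol convention is an assumption that only holds for some tickers on TradingView, and this is not the profile logic of the script itself:

```pine
//@version=5
indicator("OI Delta sketch")
// Assumption: an "_OI" companion symbol exists for the charted ticker
oi = request.security(syminfo.tickerid + "_OI", timeframe.period, close,
     ignore_invalid_symbol = true)
oiDelta = oi - oi[1]
// Positive delta: new contracts opened; negative: contracts closed
plot(oiDelta, "OI Delta", style = plot.style_columns,
     color = oiDelta >= 0 ? color.teal : color.red)
```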
Typically, we monitor Open Interest in the form of its running value, either on a chart or through OI Delta histograms that depict the net change in OI for each price bar. This indicator enhances Open Interest analysis by illustrating the distribution of changes in OI on the price axis rather than the time axis (akin to Volume Profiles). While Volume Profile displays the volume that occurred at a given price level, the Open Interest Profile offers insight into where traders were opening and closing their positions.
How to use the indicator?
1. Add the script to your chart
2. A prompt will appear, asking you to select the “Start Time” (start of the range) and the “End Time” (end of the range) by clicking anywhere on your chart.
3. Within a few seconds, a profile will be generated. If you wish to alter the selected range, you can drag the "Start Time" and "End Time" markers accordingly.
4. Enjoy the script and feel free to explore all the settings.
To learn more about each input in indicator settings, please read the provided tooltips. These can be accessed by hovering over or clicking on the ( i ) symbol next to the input.
MTF Stationary Extreme Indicator
The Multiple Timeframe Stationary Extreme Indicator is designed to help traders identify extreme price movements across different timeframes. By analyzing extremes in price action, this indicator aims to provide valuable insights into potential overbought and oversold conditions, offering opportunities for trading decisions.
The indicator operates by calculating the difference between the latest high/low and the high/low a specified number of periods back. This difference is expressed as a percentage, allowing for easy comparison and interpretation. Positive values indicate an increase in the extreme, while negative values suggest a decrease.
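A hedged sketch of that calculation in Pine (input names and defaults are illustrative, not the original script's):

```pine
//@version=5
indicator("Stationary extreme sketch")
len = input.int(20, "Periods back")
htf = input.timeframe("D", "Higher timeframe")
// Percent difference between the latest extreme and its value `len` bars back
hiPct = (high - high[len]) / high[len] * 100
loPct = (low - low[len]) / low[len] * 100
// The same measure evaluated on the chosen higher timeframe
htfHiPct = request.security(syminfo.tickerid, htf,
     (high - high[len]) / high[len] * 100)
plot(hiPct, "High extreme %", color.green)
plot(loPct, "Low extreme %", color.red)
plot(htfHiPct, "HTF high extreme %", color.blue)
hline(0)
```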
One of the unique features of this indicator is its ability to incorporate multiple timeframes. Traders can choose a higher timeframe to analyze alongside the current timeframe, providing a broader perspective on market dynamics. This feature enables a comprehensive assessment of extreme price movements, considering both short-term and longer-term trends.
By observing extreme movements on different timeframes, traders can gain deeper insights into market conditions. This can help in identifying potential areas of confluence or divergence, supporting more informed trading decisions. For example, when extreme movements align across multiple timeframes, it may indicate a higher probability of a significant price reversal or continuation.
To use the Multiple Timeframe Stationary Extreme Indicator effectively, traders should consider a few key points:
- Choose the Timeframes: Select the appropriate timeframes based on your trading strategy and objectives. The current timeframe represents the focus of your analysis, while the higher timeframe provides a broader context. Ensure the chosen timeframes align with your trading style and the asset you are trading.
- Interpret Extreme Movements: Pay attention to extreme movements that breach certain levels. Values above zero indicate a rise in the extreme, potentially signaling overbought conditions. Conversely, values below zero suggest a decrease, potentially indicating oversold conditions. Use these extreme movements as potential entry or exit signals, in conjunction with other indicators or confirmation signals.
- Validate with Price Action: Confirm the extreme movements observed on the indicator with price action. Look for confluence between the indicator's extreme levels and key support or resistance levels, trendlines, or chart patterns. This can provide added confirmation and increase the reliability of the signals generated by the indicator.
- Consider Volatility Filters: The indicator can be enhanced by incorporating volatility filters. By adjusting the sensitivity of the extreme differences calculation based on market volatility, traders can adapt the indicator to different market conditions. Higher volatility may require a longer lookback period, while lower volatility may call for a shorter one. Experiment with volatility filters to fine-tune the indicator's performance.
- Combine with Other Analysis Techniques: The Multiple Timeframe Stationary Extreme Indicator is most effective when used as part of a comprehensive trading strategy. Combine it with other technical analysis tools, such as trend indicators, oscillators, or chart patterns, to form a well-rounded approach. Consider risk management techniques and money management principles to optimize your trading strategy.
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Remember that trading indicators, including the Multiple Timeframe Stationary Extreme Indicator, should not be used in isolation. They serve as tools to assist in decision-making, but they require proper context, analysis, and confirmation. Always conduct thorough analysis and consider market conditions, news events, and other relevant factors before making trading decisions.
It's recommended to backtest the indicator on historical data to assess its performance and effectiveness for your trading approach. This will help you understand its strengths and limitations, allowing you to refine and optimize your usage of the indicator.
Buying/Selling Pressure Cycle (PreCy)
No-lag estimation of the buying/selling pressure for each candle.
----------------------------------------------------------------------------------------------------
WHY PreCy?
How much bearish pressure is there behind a group of bullish candles?
Is this bearish pressure increasing?
When might it overcome the bullish pressure?
Those were my questions when I started this indicator. It led me down the rabbit hole, where I discovered some secrets about the market. So I pushed deeper and developed it a lot more, in order to understand what is really happening "behind the scenes".
There are now 3 ways to read this indicator. It might look complicated at first, but the reward is to be able to anticipate and understand a lot more.
You can show/hide all the plots in the settings. So you can choose the way you prefer to use it.
----------------------------------------------------------------------------------------------------
FIRST WAY TO READ PreCy : The SIGNAL line
Go in the settings of PreCy, in "DISPLAY", uncheck "The pivot lines of the SIGNAL" and "The CYCLE areas". Make sure "The SIGNAL line" is checked.
The SIGNAL shows an estimation of the buying/selling pressure of each candle, going from 100 (100% bullish candle) to -100 (100% bearish candle). A doji would be shown close to zero.
Formula: Estimated % of buying pressure - Estimated % of selling pressure
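The script itself borrows wick-based calculations (credited below), but a simplified sketch of the idea, assuming pressure is read from where the close sits in the candle's range, could look like this:

```pine
//@version=5
indicator("SIGNAL sketch")
// Simplified pressure estimate from candle anatomy; the published
// script's wick-based formula differs in its details.
rng = high - low
buyPct = rng > 0 ? (close - low) / rng * 100 : 50.0
sellPct = rng > 0 ? (high - close) / rng * 100 : 50.0
signal = buyPct - sellPct // +100 = fully bullish, -100 = fully bearish, doji near 0
plot(signal, "SIGNAL", signal >= 0 ? color.teal : color.red)
hline(0)
```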
It is a very choppy line in general, but its colors help make sense of it.
When this choppiness alternates between the extremes, then there is not much pressure on each candle, and it's very unpredictable.
When the pressure increases, the SIGNAL's amplitude changes. It "compresses", meaning there is some interest in the market. It can compress by alternating above and below zero, or it can stay above zero (bullish), or below zero (bearish) for a while.
When the SIGNAL becomes linear (as opposed to choppy), there is a lot of pressure, and it is directional. The participants agree on a move in a chosen direction.
The trajectory of the SIGNAL can help anticipate when a move is going to happen (directional increase of pressure), or stop (returning to zero) and possibly reverse (crossing zero).
Advanced uses:
The SIGNAL can make more sense on a specific timeframe, that would be aligned with the frequency of the orders at that moment. So it is a good idea to switch between timeframes until it gets less choppy, and more directional.
It is interesting to follow any regular progression of the SIGNAL, as it can reveal the intentions of the market makers to go in a certain direction discretely. There can be almost no volume and no move in the price action, yet the SIGNAL gets linear and moves away from one extreme, slowly crosses the zeroline, and pushes to the other extreme at the same time as the amplitude of the price action increases drastically.
----------------------------------------------------------------------------------------------------
SECOND WAY TO READ PreCy : The PIVOTS of the SIGNAL line
Go in the settings of PreCy, in "DISPLAY", and uncheck "The CYCLE areas". Make sure "The SIGNAL line" and "The pivot lines of the SIGNAL" are checked.
The PIVOTS help make sense of the apparent chaos of the SIGNAL. They can reveal the overall direction of the choppy moves.
This is especially true when the two PIVOT lines are parallel and clearly sloped.
----------------------------------------------------------------------------------------------------
THIRD WAY TO READ PreCy : The CYCLE
Go in the settings of PreCy, in "DISPLAY", and uncheck "The SIGNAL line" and "The pivot lines of the SIGNAL". Make sure "The CYCLE areas" is checked.
The CYCLE is a Moving Average of the SIGNAL in relation to each candle's size.
Formula: 6-period Moving Average of the SIGNAL * (body of the current candle / 200-period Moving Average of the candles' bodies)
The result goes from 200 to -200.
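A hedged sketch of the stated formula, reusing the simplified SIGNAL estimate from earlier (the clamp to the stated range is an assumption):

```pine
//@version=5
indicator("CYCLE sketch")
// Reuses the simplified SIGNAL estimate from the earlier sketch
rng = high - low
signal = rng > 0 ? ((close - low) - (high - close)) / rng * 100 : 0.0
body = math.abs(close - open)
// Stated formula: 6-period MA of SIGNAL, weighted by relative body size
cycle = ta.sma(signal, 6) * (body / ta.sma(body, 200))
// Clamping to the stated +/-200 range is an assumption
plot(math.max(-200, math.min(200, cycle)), "CYCLE")
hline(0)
```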
The CYCLE shows longer term indications of the pressures of the market.
Analysing the trajectory of the CYCLE can help predict the direction of the price.
When the CYCLE goes above or below the gray low intensity zone, it signals some interest in the move.
When the CYCLE stays above 100 or below -100, it is a sign of strength in the move.
When the CYCLE has stayed out of the gray low intensity zone and then returns inside it, it is a strong signal of a probable change in behavior.
----------------------------------------------------------------------------------------------------
ALERTS
In the settings, you can pick the alerts you're interested in.
To activate them, right click on the chart (or alt+a), choose "Add alert on Buying/Selling Pressure Cycle (PreCy)" then "Any alert()", then "Create".
Feel free to activate them on different timeframes. The alerts show which timeframe they are from (e.g., "TF:15" for the 15-minute TF).
I have added a lot more conditions to my PreCy, taken from FREMA Trend, for example. You can do the same with your favorite scripts to make PreCy more accurate for your style.
----------------------------------------------------------------------------------------------------
Borrowed scripts:
To estimate the buying and selling pressures, PreCy uses the wick calculations of "Volume net histogram" by RafaelZioni.
To filter the alerts, PreCy uses the calculations of "Amplitude" by Koholintian.
----------------------------------------------------------------------------------------------------
DO NOT BASE YOUR TRADING DECISIONS ON 1 SINGLE INDICATOR'S SIGNALS.
Always confirm your ideas by other means, like price action and indicators of a different nature.
Variance Windows
Just a quick trial at using statistical variance/standard deviation as an indicator. The general idea is that higher variance in the short term tends to indicate more volatility/movement. The other thing is that it can help set probabilistic boundaries for movements (e.g., if you set the bars to be 2 standard deviations, you are visualizing a range that denotes a 95% probability window).
I haven't really tried forming any sort of strategies around this indicator, but there are a few potential possibilities for its usability.
Generally speaking, the magnitude of the standard deviation (relative to the price) is small when the market is consolidating. It is larger when the market is trending up or down.
If the long-term variance and the short-term variance are close to each other in scale, the trend is strong. Otherwise, the trend is weak. Note that I am only saying that the "trend" is strong, not that it is necessarily positive. This could be an up-trend, down-trend, or a sideways trend.
When the magnitudes of the variances are changing from very similar to very different (usually it's the long-term variance getting much larger than the short-term one), that's an indication that the previous trend is coming to an end.
Typically, it's the long-term variance that is bigger than the short-term. However, when you see them cross where the short-term is bigger or even much bigger than the long-term, it's indicative of a spike event (more often than not, one that is not favorable if you are holding any position on a given security).
Because you have probabilistic windows based on some n standard deviations from the midline (which in this version, I've used a ZLEMA as that midline), those boundaries could possibly be used to set stop-loss limits and the like.
There's nothing too complicated or deep about this particular indicator. All I'm really doing is assuming that we are dealing with a Gaussian random process. I am actually using EMA as my mean computation, even though for a proper Gaussian variance calculation, I should be using SMA. When I used SMA, though, it felt a lot more sensitive to noise, which made it feel less usable. In any case, it's just a simple first trial in many years after not having even looked at Pine Script to finally messing around with it again. Open to a litany of criticisms as I'm sure there will be many that are rightly deserved. Otherwise, happy scalping to thee.
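For reference, a minimal sketch of the idea as described, assuming a common ZLEMA construction for the midline and EMA-based deviations:

```pine
//@version=5
indicator("Variance window sketch", overlay=true)
len = input.int(20, "Length")
n = input.float(2.0, "Std devs") // 2 sd = ~95% window under the Gaussian assumption
// ZLEMA midline (one common construction; an assumption here)
lag = math.floor((len - 1) / 2)
mid = ta.ema(close + (close - close[lag]), len)
// EMA-based variance, matching the description's use of EMA as the mean
mean = ta.ema(close, len)
dev = math.sqrt(ta.ema(math.pow(close - mean, 2), len))
plot(mid, "Midline")
plot(mid + n * dev, "Upper", color.gray)
plot(mid - n * dev, "Lower", color.gray)
```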
5m Candle Overlay
Description:
The 5m Candle Overlay indicator is a powerful technical analysis tool designed to overlay 5-minute candles onto your chart. This indicator enables detailed analysis of price action within the 5-minute time frame, providing valuable insights into short-term market movements.
How it Works:
The 5m Candle Overlay indicator calculates the OHLC (Open, High, Low, Close) values specifically for the 5-minute time frame. By utilizing the request.security function, it retrieves the OHLC values for each 5-minute candle. The indicator then determines the color for each candle based on a comparison between the close and open prices. Bullish candles are assigned a green color with 75% opacity, while bearish candles are assigned a red color with 75% opacity. Additionally, the indicator checks if the current bar index is a multiple of 5 to prevent overlapping and enhance visualization.
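A hedged sketch of those mechanics (the opacity reading and the spacing rule are assumptions; the published script's details may differ):

```pine
//@version=5
indicator("5m overlay sketch", overlay=true)
// Pull 5-minute OHLC onto the current chart
[o, h, l, c] = request.security(syminfo.tickerid, "5", [open, high, low, close])
// "75% opacity" is read here as transparency 25; an assumption
col = c >= o ? color.new(color.green, 25) : color.new(color.red, 25)
// Draw only when bar_index is a multiple of 5 to avoid overlap
plotcandle(bar_index % 5 == 0 ? o : na, h, l, c, "5m candles",
     color = col, wickcolor = col, bordercolor = col)
```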
Usage:
To effectively utilize the 5m Candle Overlay indicator, follow these steps:
1. Apply the 5m Candle Overlay indicator to your chart by adding it from the available indicators.
2. Observe the overlay of 5-minute candles on your chart, providing a detailed representation of price movements within the 5-minute time frame.
3. Interpret the candles:
- Bullish candles (green by default) indicate that the close price is higher than the open price, suggesting potential buying pressure.
- Bearish candles (red by default) indicate that the close price is lower than the open price, suggesting potential selling pressure.
4. Note that the indicator plots a candle only every fifth bar to prevent overlapping, ensuring clarity and ease of interpretation.
5. Combine the analysis of the 5-minute candles with other technical analysis tools, such as support and resistance levels, trend lines, or indicators from different time frames, to gain deeper insights and identify potential trade setups.
6. Implement appropriate risk management strategies, including setting stop-loss orders and position sizing, to effectively manage your trades within the 5-minute time frame and protect your capital.
Trend Angle
Introduction:
In today's post, we'll dive deep into the source code of a unique trading tool, the Trend Angle Indicator. The script is an indicator that calculates the trend angle for a given financial instrument. This powerful tool can help traders identify the strength and direction of a trend, allowing them to make informed decisions.
Overview of the Trend Angle Indicator:
The Trend Angle Indicator calculates the trend angle based on the slope of the price movement over a specified period. It uses an Exponential Moving Average (EMA) to smooth the data and an Epanechnikov kernel function for additional smoothing. The indicator provides a visual representation of the trend angle, making it easy to interpret for traders of all skill levels.
Let's break down the key components of the script:
Inputs:
Length: The number of periods to calculate the trend angle (default: 8)
Scale: A scaling factor for the ATR (Average True Range) calculation (default: 2)
Smoothing: The smoothing parameter for the Epanechnikov kernel function (default: 2)
Smoothing Factor: The radius of the Epanechnikov kernel function (default: 1)
Functions:
ema(): Exponential Moving Average calculation
atan2(): Arctangent function
degrees(): Conversion of radians to degrees
epanechnikov_kernel(): Epanechnikov kernel function for additional smoothing
Calculations:
atr: The EMA of the True Range
slope: The slope of the price movement over the given length
angle_rad: The angle of the slope in radians
degrees: The smoothed angle in degrees
Plotting:
Trend Angle: The trend angle, plotted as a line on the chart
Horizontal lines: 0, 90, and -90 degrees as reference points
How the Trend Angle Indicator Works:
The Trend Angle Indicator begins by calculating the Exponential Moving Average (EMA) of the True Range (TR) for a given financial instrument. This smooths the price data and provides a more accurate representation of the instrument's price movement.
Next, the indicator calculates the slope of the price movement over the specified length. This slope is then divided by the scaled ATR to normalize the trend angle based on the instrument's volatility. The angle is calculated using the atan2() function, which computes the arctangent of the slope.
The final step in the process is to smooth the trend angle using the Epanechnikov kernel function. This function provides additional smoothing to the trend angle, making it easier to interpret and reducing the impact of short-term price fluctuations.
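Putting the pipeline together, a hedged sketch might look like the following; the slope-to-angle step and the kernel weighting are reconstructions, not the published code:

```pine
//@version=5
indicator("Trend angle sketch")
length = input.int(8, "Length")
scale = input.float(2.0, "Scale")
radius = input.int(1, "Smoothing Factor")
// EMA of the True Range, scaled for normalization
atr = ta.ema(ta.tr, length) * scale
// Volatility-normalized slope over `length` bars
slope = (close - close[length]) / atr
angleDeg = math.todegrees(math.atan(slope))
// Epanechnikov kernel smoothing: w(u) = 0.75 * (1 - u^2) for |u| <= 1
num = 0.0
den = 0.0
for k = 0 to radius
    u = 1.0 * k / (radius + 1)
    w = 0.75 * (1 - u * u)
    num += w * angleDeg[k]
    den += w
plot(num / den, "Trend Angle")
hline(0)
hline(90)
hline(-90)
```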
Conclusion:
The Trend Angle Indicator is a powerful trading tool that allows traders to quickly and easily determine the strength and direction of a trend. By combining the Exponential Moving Average, ATR, and Epanechnikov kernel function, this indicator provides an accurate and easily interpretable representation of the trend angle. Whether you're an experienced trader or just starting, the Trend Angle Indicator can provide valuable insights into the market and help improve your trading decisions.
SuperTrend with Chebyshev Filter
Modified Super Trend with Chebyshev Filter
The Modified Super Trend is an innovative take on the classic Super Trend indicator. This advanced version incorporates a Chebyshev filter, which significantly enhances its capabilities by reducing false signals and improving overall signal quality. In this post, we'll dive deep into the Modified Super Trend, exploring its history, the benefits of the Chebyshev filter, and how it effectively addresses the challenges associated with smoothing, delay, and noise.
History of the Super Trend
The Super Trend indicator, developed by Olivier Seban, has been a popular tool among traders since its inception. It helps traders identify market trends and potential entry and exit points. The Super Trend uses average true range (ATR) and a multiplier to create a volatility-based trailing stop, providing traders with a dynamic tool that adapts to changing market conditions. However, the original Super Trend has its limitations, such as the tendency to produce false signals during periods of low volatility or sideways trading.
The Chebyshev Filter
The Chebyshev filter is a powerful mathematical tool that makes an excellent addition to the Super Trend indicator. It effectively addresses the issues of smoothing, delay, and noise associated with traditional moving averages. Chebyshev filters are named after Pafnuty Chebyshev, a renowned Russian mathematician who made significant contributions to the field of approximation theory.
The Chebyshev filter is capable of producing smoother, more responsive moving averages without introducing additional lag. This is possible because the filter minimizes the worst-case error between the ideal and the actual frequency response. There are two types of Chebyshev filters: Type I and Type II. Type I Chebyshev filters are designed to have an equiripple response in the passband, while Type II Chebyshev filters have an equiripple response in the stopband. The Modified Super Trend allows users to choose between these two types based on their preferences.
Overcoming the Challenges
The Modified Super Trend addresses several challenges associated with the original Super Trend:
Smoothing: The Chebyshev filter produces a smoother moving average without introducing additional lag. This feature is particularly beneficial during periods of low volatility or sideways trading, as it reduces the number of false signals.
Delay: The Chebyshev filter helps minimize the delay between price action and the generated signal, allowing traders to make timely decisions based on more accurate information.
Noise Reduction: The Chebyshev filter's ability to minimize the worst-case error between the ideal and actual frequency response reduces the impact of noise on the generated signals. This feature is especially useful when using the true range as an offset for the price, as it helps generate more reliable signals within a reasonable time frame.
The Great Replacement
The Modified Super Trend with Chebyshev filter is an excellent replacement for the original Super Trend indicator. It offers significant improvements in terms of signal quality, responsiveness, and accuracy. By incorporating the Chebyshev filter, the Modified Super Trend effectively reduces the number of false signals during low volatility or sideways trading, making it a more reliable tool for identifying market trends and potential entry and exit points.
In-Depth Guide to the Modified Super Trend Settings
The Modified Super Trend with Chebyshev filter offers a wide range of settings that allow traders to fine-tune the indicator to suit their specific trading styles and objectives. In this section, we will discuss each setting in detail, explaining its purpose and how to use it effectively.
Source
The source setting determines the price data used for calculations. The default setting is hl2, which calculates the average of the high and low prices. You can choose other price data sources such as close, open, or ohlc4 (average of open, high, low, and close prices) based on your preference.
Up Color and Down Color
These settings control the color of the trend line when the market is in an uptrend (up_color) and a downtrend (down_color). You can customize these colors to your liking, making it easier to visually identify the current market trend.
Text Color
This setting controls the color of the text displayed on the chart when using labels to indicate trend changes. You can choose any color that contrasts well with your chart background for better readability.
Mean Length
The mean_length setting determines the length (number of bars) used for the Chebyshev moving average calculation. A shorter length will make the moving average more responsive to price changes, while a longer length will produce a smoother moving average. It is crucial to find the right balance between responsiveness and smoothness, as a too-short length may generate false signals, while a too-long length might produce lagging signals. The default value is 64, but you can experiment with different values to find the optimal setting for your trading strategy.
Mean Ripple
The mean_ripple setting influences the Chebyshev filter's ripple effect in the passband (Type I) or stopband (Type II). The ripple effect represents small oscillations in the frequency response, which can impact the moving average's smoothness. The default value is 0.01, but you can experiment with different values to find the best balance between smoothness and responsiveness.
Chebyshev Type: Type I or Type II
The style setting allows you to choose between Type I and Type II Chebyshev filters. Type I filters have an equiripple response in the passband, while Type II filters have an equiripple response in the stopband. Depending on your preference for smoothness and responsiveness, you can choose the type that best fits your trading style.
ATR Style
The atr_style setting determines the method used for calculating the Average True Range (ATR). By default (false), it uses the traditional high-low range. When set to true, it uses the absolute difference between the open and close prices. You can choose the method that works best for your trading strategy and the market you are trading.
ATR Length
The atr_length setting controls the length (number of bars) used for calculating the ATR. Similar to the mean_length, a shorter length will make the ATR more responsive to price changes, while a longer length will produce a smoother ATR. The default value is 64, but you can experiment with different values to find the optimal setting for your trading strategy.
ATR Ripple
The atr_ripple setting, like the mean_ripple, influences the ripple effect of the Chebyshev filter used in the ATR calculation. The default value is 0.05, but you can experiment with different values to find the best balance between smoothness and responsiveness.
Multiplier
The multiplier setting determines the factor by which the ATR is multiplied before being added to, or subtracted from, the Chebyshev moving average to form the trend line.
Super Trend Logic and Signal Optimization
The Modified Super Trend with Chebyshev filter is designed to minimize false signals and provide a clear indication of market trends. It does so by using a combination of moving averages, Average True Range (ATR), and a multiplier. In this section, we will discuss the Super Trend's logic, its ability to prevent false signals, and the early warning crosses added to the indicator.
Super Trend Logic
The Super Trend's logic is based on a combination of the Chebyshev moving average and ATR. The Chebyshev moving average is a smooth moving average that effectively filters out market noise, while the ATR is a measure of market volatility.
The Super Trend is calculated by adding or subtracting a multiple of the ATR from the Chebyshev moving average. The multiplier is a user-defined value that determines the distance between the trend line and the price action. A larger multiplier results in a wider channel, reducing the likelihood of false signals but potentially missing out on valid trend changes.
Preventing False Signals
The Super Trend is designed to minimize false signals by maintaining its trend direction until a significant change in the market occurs. In a downtrend, the trend line will only decrease in value, and in an uptrend, it will only increase. This helps prevent false signals caused by temporary price fluctuations or market noise.
When the price crosses the trend line, the Super Trend does not immediately change its direction. Instead, it employs a safety logic to ensure that the trend change is genuine. The safety logic checks if the new trend line (calculated using the updated moving average and ATR) is more extreme than the previous one. If it is, the trend line is updated; otherwise, the previous trend line is maintained. This mechanism further reduces the likelihood of false signals by ensuring that the trend line only changes when there is a significant shift in the market.
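A simplified sketch of this stop-and-reverse skeleton, with ta.ema standing in for the Chebyshev moving average (so this is an illustration of the logic, not the published code):

```pine
//@version=5
indicator("Modified SuperTrend sketch", overlay=true)
len = input.int(64, "Mean length")
mult = input.float(2.0, "Multiplier")
basis = ta.ema(hl2, len) // stand-in for the Chebyshev moving average
atr = ta.ema(ta.tr, len)
upper = basis + mult * atr
lower = basis - mult * atr
var float trend = na
var int dir = 1
// Safety logic: the line may only ratchet in the trend's direction
trend := dir == 1 ? math.max(nz(trend, lower), lower) : math.min(nz(trend, upper), upper)
if dir == 1 and close < trend
    dir := -1
    trend := upper
else if dir == -1 and close > trend
    dir := 1
    trend := lower
plot(trend, "SuperTrend", dir == 1 ? color.green : color.red)
```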
Early Warning Crosses
To provide traders with additional insight, the Modified Super Trend with Chebyshev filter includes early warning crosses. These crosses are plotted on the chart when the price crosses the trend line without the safety logic. Although these crosses do not necessarily indicate a trend change, they can serve as a valuable heads-up for traders to monitor the market closely and prepare for potential trend reversals.
In conclusion, the Modified Super Trend with Chebyshev filter offers a significant improvement over the original Super Trend indicator. By incorporating the Chebyshev filter, this modified version effectively addresses the challenges of smoothing, delay, and noise reduction while minimizing false signals. The wide range of customizable settings allows traders to tailor the indicator to their specific needs, while the inclusion of early warning crosses provides valuable insight into potential trend reversals.
Ultimately, the Modified Super Trend with Chebyshev filter is an excellent tool for traders looking to enhance their trend identification and decision-making abilities. With its advanced features, this indicator can help traders navigate volatile markets with confidence, making more informed decisions based on accurate, timely information.
Stochastic Chebyshev Smoothed With Zero Lag Smoothing
Fast and Smooth Stochastic Oscillator with Zero Lag
Introduction
In this post, we will discuss a custom implementation of a Stochastic Oscillator that not only smooths the signal but also does so without introducing any noticeable lag. This is a remarkable achievement, as it allows for a fast Stochastic Oscillator that is less prone to false signals without being slow and sluggish.
We will go through the code step by step, explaining the various functions and the overall structure of the code.
First, let's start with a brief overview of the Stochastic Oscillator and the problem it addresses.
Background
The Stochastic Oscillator is a momentum indicator used in technical analysis to determine potential overbought or oversold conditions in an asset's price. It compares the closing price of an asset to its price range over a specified period. However, the Stochastic Oscillator is susceptible to false signals due to its sensitivity to price movements. This is where our custom implementation comes in, offering a smoother signal without noticeable lag, thus reducing the number of false signals.
Despite its popularity and widespread use in technical analysis, the Stochastic Oscillator has its share of drawbacks. While it is a price scaler that allows for easier comparisons across different assets and timeframes, it is also known for generating false signals, which can lead to poor trading decisions. In this section, we will delve deeper into the limitations of the Stochastic Oscillator and discuss the challenges associated with smoothing to mitigate its drawbacks.
Limitations of the Stochastic Oscillator
False Signals: The primary issue with the Stochastic Oscillator is its tendency to produce false signals. Since it is a momentum indicator, it reacts to short-term price movements, which can lead to frequent overbought and oversold signals that do not necessarily indicate a trend reversal. This can result in traders entering or exiting positions prematurely, incurring losses or missing out on potential gains.
Sensitivity to Market Noise: The Stochastic Oscillator is highly sensitive to market noise, which can create erratic signals in volatile markets. This sensitivity can make it difficult for traders to discern between genuine trend reversals and temporary fluctuations.
Lack of Predictive Power: Although the Stochastic Oscillator can help identify potential overbought and oversold conditions, it does not provide any information about the future direction or strength of a trend. As a result, it is often used in conjunction with other technical analysis tools to improve its predictive power.
Challenges of Smoothing the Stochastic Oscillator
To address the limitations of the Stochastic Oscillator, many traders attempt to smooth the indicator by applying various techniques. However, these approaches are not without their own set of challenges:
Trade-off between Smoothing and Responsiveness: The process of smoothing the Stochastic Oscillator inherently involves reducing its sensitivity to price movements. While this can help eliminate false signals, it can also result in a less responsive indicator, which may not react quickly enough to genuine trend reversals. This trade-off can make it challenging to find the optimal balance between smoothing and responsiveness.
Increased Complexity: Smoothing techniques often involve the use of additional mathematical functions and algorithms, which can increase the complexity of the indicator. This can make it more difficult for traders to understand and interpret the signals generated by the smoothed Stochastic Oscillator.
Lagging Signals: Some smoothing methods, such as moving averages, can introduce a time lag into the Stochastic Oscillator's signals. This can result in late entry or exit points, potentially reducing the profitability of a trading strategy based on the smoothed indicator.
Overfitting: In an attempt to eliminate false signals, traders may over-optimize their smoothing parameters, resulting in a Stochastic Oscillator that is overfitted to historical data. This can lead to poor performance in real-time trading, as the overfitted indicator may not accurately reflect the dynamics of the current market.
In our custom implementation of the Stochastic Oscillator, we used a combination of Chebyshev Type I Moving Average and zero-lag Gaussian-weighted moving average filters to address the indicator's limitations while preserving its responsiveness. In this section, we will discuss the reasons behind selecting these specific filters and the advantages of using the Chebyshev filter for our purpose.
Filter Selection
Chebyshev Type I Moving Average: The Chebyshev filter was chosen for its ability to provide a smoother signal without sacrificing much responsiveness. This filter is designed to minimize the maximum error between the original and the filtered signal within a specific frequency range, effectively reducing noise while preserving the overall shape of the signal. The Chebyshev Type I Moving Average achieves this by allowing a specified amount of ripple in the passband, resulting in a more aggressive filter roll-off and better noise reduction compared to other filters, such as the Butterworth filter.
Zero-lag Gaussian-weighted Moving Average: To further improve the Stochastic Oscillator's performance without introducing noticeable lag, we used the zero-lag Gaussian-weighted moving average (GWMA) filter. This filter combines the benefits of a Gaussian-weighted moving average, which prioritizes recent data points by assigning them higher weights, with a zero-lag approach that minimizes the time delay in the filtered signal. The result is a smoother signal that is less prone to false signals and is more responsive than traditional moving average filters.
Advantages of the Chebyshev Filter
Effective Noise Reduction: The primary advantage of the Chebyshev filter is its ability to effectively reduce noise in the Stochastic Oscillator signal. By minimizing the maximum error within a specified frequency range, the Chebyshev filter suppresses short-term fluctuations that can lead to false signals while preserving the overall trend.
Customizable Ripple Factor: The Chebyshev Type I Moving Average allows for a customizable ripple factor, enabling traders to fine-tune the filter's aggressiveness in reducing noise. This flexibility allows for better adaptability to different market conditions and trading styles.
Responsiveness: Despite its effective noise reduction, the Chebyshev filter remains relatively responsive compared to other smoothing filters. This responsiveness allows for more accurate detection of genuine trend reversals, making it a suitable choice for our custom Stochastic Oscillator implementation.
Compatibility with Zero-lag Techniques: The Chebyshev filter can be effectively combined with zero-lag techniques, such as the Gaussian-weighted moving average filter used in our custom implementation. This combination results in a Stochastic Oscillator that is both smooth and responsive, with minimal lag.
Code Overview
The code begins with defining custom mathematical functions for hyperbolic sine, cosine, and their inverse functions. These functions will be used later in the code for smoothing purposes.
Next, the gaussian_weight function is defined, which calculates the Gaussian weight for a given 'k' and 'smooth_per'. The zero_lag_gwma function calculates the zero-lag moving average with Gaussian weights. This function is used to create a Gaussian-weighted moving average with minimal lag.
The chebyshevI function is an implementation of the Chebyshev Type I Moving Average, which is used for smoothing the Stochastic Oscillator. This function takes the source value (src), length of the moving average (len), and the ripple factor (ripple) as input parameters.
The main part of the code starts by defining input parameters for K and D smoothing and ripple values. The Stochastic Oscillator is calculated using the ta.stoch function with Chebyshev smoothed inputs for close, high, and low. The result is further smoothed using the zero-lag Gaussian-weighted moving average function (zero_lag_gwma).
Finally, the lag variable is calculated using the Chebyshev Type I Moving Average for the Stochastic Oscillator. The Stochastic Oscillator and the lag variable are plotted on the chart, along with upper and lower bands at 80 and 20 levels, respectively. A fill is added between the upper and lower bands for better visualization.
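An illustrative reconstruction of the gaussian_weight and zero_lag_gwma pair described above; the weight formula and lag choice are assumptions rather than the published definitions:

```pine
//@version=5
indicator("Zero-lag GWMA sketch")
// Gaussian weight for offset k within a window of `per` bars (assumed form)
gaussianWeight(k, per) =>
    math.exp(-math.pow(k / (per / 2.0), 2) / 2)
// Gaussian-weighted MA over a zero-lag-adjusted input
zeroLagGwma(src, len) =>
    lag = (len - 1) / 2
    zl = src + (src - src[lag]) // classic zero-lag adjustment
    num = 0.0
    den = 0.0
    for i = 0 to len - 1
        w = gaussianWeight(i, len)
        num += w * zl[i]
        den += w
    num / den
stochK = ta.stoch(close, high, low, 14)
plot(zeroLagGwma(stochK, 9), "Smoothed %K")
hline(80)
hline(20)
```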
Conclusion
The custom Stochastic Oscillator presented in this blog post combines the Chebyshev Type I Moving Average and zero-lag Gaussian-weighted moving average filters to provide a smooth and responsive signal without introducing noticeable lag. This innovative implementation results in a fast Stochastic Oscillator that is less prone to false signals, making it a valuable tool for technical analysts and traders alike.
However, it is crucial to recognize that the Stochastic Oscillator, despite being a price scaler, has its limitations, primarily due to its propensity for generating false signals. While smoothing techniques, like the ones used in our custom implementation, can help mitigate these issues, they often introduce new challenges, such as reduced responsiveness, increased complexity, lagging signals, and the risk of overfitting.
The selection of the Chebyshev Type I Moving Average and zero-lag Gaussian-weighted moving average filters was driven by their combined ability to provide a smooth and responsive signal while minimizing false signals. The advantages of the Chebyshev filter, such as effective noise reduction, customizable ripple factor, and responsiveness, make it an excellent fit for addressing the limitations of the Stochastic Oscillator.
When using the Stochastic Oscillator, traders should be aware of these limitations and challenges, and consider incorporating other technical analysis tools and techniques to supplement the indicator's signals. This can help improve the overall accuracy and effectiveness of their trading strategies, reducing the risk of losses due to false signals and other limitations associated with the Stochastic Oscillator.
Feel free to use, modify, or improve upon this custom Stochastic Oscillator code in your trading strategies. We hope this detailed walkthrough of the custom Stochastic Oscillator, its limitations, challenges, and filter selection has provided you with valuable insights and a better understanding of how it works. Happy trading!
Chebyshev type I and II Filter
Title: Chebyshev Type I and II Filters: Smoothing Techniques for Technical Analysis
Introduction:
In technical analysis, smoothing techniques are used to remove noise from a time series data. They help to identify trends and improve the readability of charts. One such powerful smoothing technique is the Chebyshev Type I and II Filters. In this post, we will dive deep into the Chebyshev filters, discuss their significance, and explain the differences between Type I and Type II filters.
Chebyshev Filters:
Chebyshev filters are a class of infinite impulse response (IIR) filters that are widely used in signal processing applications. They are known for their ability to provide a sharper cutoff between the passband and the stopband compared to other filter types, such as Butterworth filters. The Chebyshev filters are named after the Russian mathematician Pafnuty Chebyshev, who created the Chebyshev polynomials that form the basis for these filters.
The two main types of Chebyshev filters are:
1. Chebyshev Type I filters: These filters have an equiripple passband, which means they have equal and constant ripple within the passband. The advantage of Type I filters is that they usually provide a faster roll-off rate between the passband and the stopband compared to other filter types. However, the trade-off is that they may have larger ripples in the passband, resulting in a less smooth output.
2. Chebyshev Type II filters: These filters have an equiripple stopband, which means they have equal and constant ripple within the stopband. The advantage of Type II filters is that they provide a more controlled output by minimizing the ripple in the passband. However, this comes at the cost of a slower roll-off rate between the passband and the stopband compared to Type I filters.
Why Choose Chebyshev Filters for Smoothing?
Chebyshev filters are an excellent choice for smoothing in technical analysis due to their ability to provide a sharper transition between the passband and the stopband. This sharper transition helps in preserving the essential features of the underlying data while effectively removing noise. The two types of Chebyshev filters offer different trade-offs between the smoothness of the output and the roll-off rate, allowing users to choose the one that best suits their requirements.
Implementing Chebyshev Filters:
In the Pine Script language, we can implement the Chebyshev Type I and II filters using custom functions. We first define the custom hyperbolic functions cosh, acosh, sinh, and asinh, as well as the inverse tangent function atan. These functions are essential for calculating the filter coefficients.
Next, we create two separate functions for the Chebyshev Type I and II filters, named chebyshevI and chebyshevII, respectively. Each function takes three input parameters: the source data (src), the filter length (len), and the ripple value (ripple). The ripple value determines the amount of ripple in the passband for Type I filters and in the stopband for Type II filters. A higher ripple value results in a faster roll-off rate but may lead to a less smooth output.
Finally, we create a main function called chebyshev, which takes an additional boolean input parameter named style. If the style parameter is set to false, the function calculates the Chebyshev Type I filter using the chebyshevI function. If the style parameter is set to true, the function calculates the Chebyshev Type II filter using the chebyshevII function.
By adjusting the input parameters, users can choose the type of Chebyshev filter and configure its characteristics to suit their needs.
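A hedged sketch of that structure is below: the hyperbolic helpers follow their standard definitions, while the coefficient recursion in the Type I branch is a plausible first-order form, not necessarily the published one.

```pine
//@version=5
indicator("Chebyshev smoother sketch", overlay=true)
// Custom hyperbolic helpers (standard definitions; Pine has no built-ins)
cosh(x) => (math.exp(x) + math.exp(-x)) / 2
sinh(x) => (math.exp(x) - math.exp(-x)) / 2
acosh(x) => math.log(x + math.sqrt(x * x - 1))
asinh(x) => math.log(x + math.sqrt(x * x + 1))
// First-order Chebyshev-style smoother; coefficient form is an assumption
chebyshevI(src, len, ripple) =>
    a = cosh(acosh(1 / ripple) / len)
    b = sinh(asinh(1 / ripple) / len)
    g = (a - b) / (a + b)
    var float cheb = na
    cheb := (1 - g) * src + g * nz(cheb[1], src)
    cheb
plot(chebyshevI(close, 20, 0.05), "Chebyshev I")
```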
Conclusion:
The Chebyshev Type I and II filters are powerful smoothing techniques that can be used in technical analysis to remove noise from time series data. They offer a sharper transition between the passband and the stopband compared to other filter types, which helps in preserving the essential features of the data while effectively reducing noise. By implementing these filters in Pine Script, traders can easily integrate them into their trading strategies and improve the readability of their charts.
Endpointed SSA of Price [Loxx]
The Endpointed SSA of Price: A Comprehensive Tool for Market Analysis and Decision-Making
The financial markets present sophisticated challenges for traders and investors as they navigate the complexities of market behavior. To effectively interpret and capitalize on these complexities, it is crucial to employ powerful analytical tools that can reveal hidden patterns and trends. One such tool is the Endpointed SSA of Price, which combines the strengths of Caterpillar Singular Spectrum Analysis, a sophisticated time series decomposition method, with insights from the fields of economics, artificial intelligence, and machine learning.
The Endpointed SSA of Price has its roots in the interdisciplinary fusion of mathematical techniques, economic understanding, and advancements in artificial intelligence. This unique combination allows for a versatile and reliable tool that can aid traders and investors in making informed decisions based on comprehensive market analysis.
The Endpointed SSA of Price is not only valuable for experienced traders but also serves as a useful resource for those new to the financial markets. By providing a deeper understanding of market forces, this innovative indicator equips users with the knowledge and confidence to better assess risks and opportunities in their financial pursuits.
█ Exploring Caterpillar SSA: Applications in AI, Machine Learning, and Finance
Caterpillar SSA (Singular Spectrum Analysis) is a non-parametric method for time series analysis and signal processing. It is based on a combination of principles from classical time series analysis, multivariate statistics, and the theory of random processes. The method was initially developed in the early 1990s by a group of Russian mathematicians, including Golyandina, Nekrutkin, and Zhigljavsky.
Background Information:
SSA is an advanced technique for decomposing time series data into a sum of interpretable components, such as trend, seasonality, and noise. This decomposition allows for a better understanding of the underlying structure of the data and facilitates forecasting, smoothing, and anomaly detection. Caterpillar SSA is a particular implementation of SSA that has proven to be computationally efficient and effective for handling large datasets.
Uses in AI and Machine Learning:
In recent years, Caterpillar SSA has found applications in various fields of artificial intelligence (AI) and machine learning. Some of these applications include:
1. Feature extraction: Caterpillar SSA can be used to extract meaningful features from time series data, which can then serve as inputs for machine learning models. These features can help improve the performance of various models, such as regression, classification, and clustering algorithms.
2. Dimensionality reduction: Caterpillar SSA can be employed as a dimensionality reduction technique, similar to Principal Component Analysis (PCA). It helps identify the most significant components of a high-dimensional dataset, reducing the computational complexity and mitigating the "curse of dimensionality" in machine learning tasks.
3. Anomaly detection: The decomposition of a time series into interpretable components through Caterpillar SSA can help in identifying unusual patterns or outliers in the data. Machine learning models trained on these decomposed components can detect anomalies more effectively, as the noise component is separated from the signal.
4. Forecasting: Caterpillar SSA has been used in combination with machine learning techniques, such as neural networks, to improve forecasting accuracy. By decomposing a time series into its underlying components, machine learning models can better capture the trends and seasonality in the data, resulting in more accurate predictions.
Application in Financial Markets and Economics:
Caterpillar SSA has been employed in various domains within financial markets and economics. Some notable applications include:
1. Stock price analysis: Caterpillar SSA can be used to analyze and forecast stock prices by decomposing them into trend, seasonal, and noise components. This decomposition can help traders and investors better understand market dynamics, detect potential turning points, and make more informed decisions.
2. Economic indicators: Caterpillar SSA has been used to analyze and forecast economic indicators, such as GDP, inflation, and unemployment rates. By decomposing these time series, researchers can better understand the underlying factors driving economic fluctuations and develop more accurate forecasting models.
3. Portfolio optimization: By applying Caterpillar SSA to financial time series data, portfolio managers can better understand the relationships between different assets and make more informed decisions regarding asset allocation and risk management.
Application in the Indicator:
In the given indicator, Caterpillar SSA is applied to a financial time series (price data) to smooth the series and detect significant trends or turning points. The method is used to decompose the price data into a set number of components, which are then combined to generate a smoothed signal. This signal can help traders and investors identify potential entry and exit points for their trades.
The indicator applies the Caterpillar SSA method by first constructing the trajectory matrix using the price data, then computing the singular value decomposition (SVD) of the matrix, and finally reconstructing the time series using a selected number of components. The reconstructed series serves as a smoothed version of the original price data, highlighting significant trends and turning points. The indicator can be customized by adjusting the lag, number of computations, and number of components used in the reconstruction process. By fine-tuning these parameters, traders and investors can optimize the indicator to better match their specific trading style and risk tolerance.
Caterpillar SSA is versatile and can be applied to various types of financial instruments, such as stocks, bonds, commodities, and currencies. It can also be combined with other technical analysis tools or indicators to create a comprehensive trading system. For example, a trader might use Caterpillar SSA to identify the primary trend in a market and then employ additional indicators, such as moving averages or RSI, to confirm the trend and generate trading signals.
In summary, Caterpillar SSA is a powerful time series analysis technique that has found applications in AI and machine learning, as well as financial markets and economics. By decomposing a time series into interpretable components, Caterpillar SSA enables better understanding of the underlying structure of the data, facilitating forecasting, smoothing, and anomaly detection. In the context of financial trading, the technique is used to analyze price data, detect significant trends or turning points, and inform trading decisions.
█ Input Parameters
This indicator takes several inputs that affect its signal output. These inputs can be classified into three categories: Basic Settings, UI Options, and Computation Parameters.
Source: This input represents the source of price data, which is typically the closing price of an asset. The user can select other price data, such as opening price, high price, or low price. The selected price data is then utilized in the Caterpillar SSA calculation process.
Lag: The lag input determines the window size used for the time series decomposition. A higher lag value implies that the SSA algorithm will consider a longer range of historical data when extracting the underlying trend and components. This parameter is crucial, as it directly impacts the resulting smoothed series and the quality of extracted components.
Number of Computations: This input, denoted as 'ncomp,' specifies the number of eigencomponents to be considered in the reconstruction of the time series. A smaller value results in a smoother output signal, while a higher value retains more details in the series, potentially capturing short-term fluctuations.
SSA Period Normalization: This input is used to normalize the SSA period, which adjusts the significance of each eigencomponent to the overall signal. It helps in making the algorithm adaptive to different timeframes and market conditions.
Number of Bars: This input specifies the number of bars to be processed by the algorithm. It controls the range of data used for calculations and directly affects the computation time and the output signal.
Number of Bars to Render: This input sets the number of bars to be plotted on the chart. A higher value slows down the computation but provides a more comprehensive view of the indicator's performance over a longer period. This value controls how far back the indicator is rendered.
Color bars: This boolean input determines whether the bars should be colored according to the signal's direction. If set to true, the bars are colored using the defined colors, which visually indicate the trend direction.
Show signals: This boolean input controls the display of buy and sell signals on the chart. If set to true, the indicator plots shapes (triangles) to represent long and short trade signals.
Static Computation Parameters:
The indicator also includes several internal parameters that affect the Caterpillar SSA algorithm, such as Maxncomp, MaxLag, and MaxArrayLength. These parameters set the maximum allowed values for the number of computations, the lag, and the array length, ensuring that the calculations remain within reasonable limits and do not consume excessive computational resources.
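For reference, inputs like the ones described above are typically declared along the following lines in Pine Script v5 (illustrative defaults and identifiers, not the script's actual ones):

src         = input.source(close, "Source")
lag         = input.int(10,  "Lag", minval = 2)
ncomp       = input.int(2,   "Number of Computations", minval = 1)
nBars       = input.int(500, "Number of Bars")
nRender     = input.int(500, "Number of Bars to Render")
colorBars   = input.bool(true, "Color bars")
showSignals = input.bool(true, "Show signals")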
█ A Note on Endpointed, Non-repainting Indicators
An endpointed indicator is one that does not recalculate or repaint its past values based on new incoming data. In other words, the indicator's previous signals remain the same even as new price data is added. This is an important feature because it ensures that the signals generated by the indicator are reliable and accurate, even after the fact.
When an indicator is non-repainting or endpointed, it means that the trader can have confidence in the signals being generated, knowing that they will not change as new data comes in. This allows traders to make informed decisions based on historical signals, without the fear of the signals being invalidated in the future.
In the case of the Endpointed SSA of Price, this non-repainting property is particularly valuable because it allows traders to identify trend changes and reversals with a high degree of accuracy, which can be used to inform trading decisions. This can be especially important in volatile markets where quick decisions need to be made.
Stophunt Wick
Acknowledgement
This indicator is dedicated to my friend Alexandru who saved me from one of these liquidation raids which almost liquidated me.
Alexandru is one of the best scalpers out there and he always nails his entries at the tip of these wicks.
This inspired me to create this indicator.
What's a Liquidation Wick?
It's the fast stop-hunting wick that catches everyone out by triggering their stop-losses and liquidations.
Liquidity is the lifeblood of the market, and liquidation is the process that moves price.
This indicator identifies when a liquidity pool is being raided to trigger buy or sell stops; these raids are also known as stop-hunts.
How does it work?
When the market consolidates in one direction, it builds up liquidity zones.
The market maker breaks out of these consolidation phases with a dramatic pump or dump that raids these liquidity zones.
This is also called a stop-hunt or liquidity raid. Afterwards, price starts reversing back in the opposite direction.
This is most noticeable in the length of a candle's wick relative to the candle's total size, formed in a very short amount of time.
This indicator highlights such candles accordingly.
Settings
The wick and candle ratios work with the default values, but fine-tuning them will improve usability.
Wick Ratio: The size of the wick compared to the body of a candle.
Adjust this to a higher ratio on smaller timeframes or a smaller ratio on bigger timeframes, to suit your trading style for spotting a trend reversal.
Candle Ratio: The minimum size of the candle, by default 0.75% of the current price.
For example, if BTC is at 20,000, the candle must span at least 150.
This can be fine-tuned to a bigger candle size on higher timeframes or a smaller one on shorter timeframes, depending on the trade type.
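A minimal sketch of how these two filters might be expressed in Pine Script v5 (hypothetical input names; the published script's internals may differ):

wickRatio   = input.float(2.0,  "Wick Ratio")        // wick must be at least 2x the body
candleRatio = input.float(0.75, "Candle Ratio (%)")  // candle must span at least 0.75% of price
body      = math.abs(close - open)
upperWick = high - math.max(open, close)
lowerWick = math.min(open, close) - low
bigWick   = math.max(upperWick, lowerWick) >= body * wickRatio
bigCandle = (high - low) >= close * candleRatio / 100  // e.g. BTC at 20,000 -> minimum 150
stophuntWick = bigWick and bigCandle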
How to use it?
This indicator identifies when a liquidity pool is being raided to trigger buy or sell stops, also known as stop-hunts. It can be used on its own for scalping, but a good few other indicators can help confirm bigger-timeframe trades.
Scalp
This indicator shows the most chaotic moments in price action; therefore it works best on smaller timeframes, ideally 3- or 5-minute candles.
- Wait for the market to start pumping or dumping.
- The current candle will change colour (Bullish/Bearish).
- Enter the trade as soon as price starts to reverse.
- Place the stop-loss outside of the current candle.
- Wait for the Liquidation Wick to appear as confirmation.
Price is very chaotic during a liquidity stop-hunt raid but there is a saying:
"In the midst of chaos, there is also opportunity" - Sun-Tzu
Since this is a very high-risk, high-reward strategy, it is advised to practice on paper trades first.
Practice until perfect, and this indicator can become your bread-and-butter scalp confirmation.
Fair Value Gap
The FVG strategy is the most accurate in conjunction with this indicator.
Normally price reverses after consuming a fair value gap, but it is often difficult to know when and where.
This indicator identifies those crucial entry points where price reverses course.
Support and Resistance
This indicator can also be used in conjunction with support and resistance lines.
Generally the stop-hunt will dip deep below support or spike far above resistance to liquidate positions.
Bollinger Bands
The Bollinger Bands strategy would be to wait until price breaks out of the band.
Once the wick is formed, it would be an ideal entry point.
Script change
This is an open-source script; feel free to modify it according to your needs and to amplify your existing strategy.
Bar Move Probability Price Levels (BMPPL)
Hello fellow traders! I am thrilled to present my latest creation, the Bar Move Probability Price Levels (BMPPL) indicator. This powerful tool offers a statistical edge in your trading by helping you understand the likelihood of price movements at multiple levels based on historical data. In this post, I'll provide an overview of the indicator, its features, and how it can enhance your trading experience. Let's dive in!
What is the Bar Move Probability Price Levels Indicator?
The Bar Move Probability Price Levels (BMPPL) indicator is a versatile tool that calculates the probability of a bar's price movement at multiple levels, either up or down, based on past occurrences of similar price movements. This comprehensive approach can provide valuable insights into the potential direction of the market, allowing you to make better-informed trading decisions.
One of the standout features of the BMPPL indicator is its flexibility. You can choose to see the probabilities of reaching various price levels, or you can focus on the highest probability move by adjusting the "Max Number of Elements" and "Step Size" settings. This flexibility ensures that the indicator caters to your specific trading style and requirements.
Max Number of Elements and Step Size: Fine-Tuning Your BMPPL Indicator
The BMPPL indicator allows you to customize its output to suit your trading style and requirements through two key settings: Max Number of Elements and Step Size.
Max Number of Elements: This setting determines the maximum number of price levels displayed by the indicator. By default, it is set to 1000, meaning the indicator will show probabilities for up to 1000 price levels. You can adjust this setting to limit the number of price levels displayed, depending on your preference and trading strategy.
Step Size: The Step Size setting determines the increment between displayed price levels. By default, it is set to 100, which means the indicator will display probabilities for every 100th price level. Adjusting the Step Size allows you to control the granularity of the displayed probabilities, enabling you to focus on specific price movements.
By adjusting the Max Number of Elements and Step Size settings, you can fine-tune the BMPPL indicator to focus on the most relevant price levels for your trading strategy. For example, if you want to concentrate on the highest probability move, you can set the Max Number of Elements to 1 and the Step Size to 1. This will cause the indicator to display only the price level with the highest probability, simplifying your trading decisions.
Probability Calculation: Understanding the Core Concept
The BMPPL indicator calculates the probability of a bar's price movement by analyzing historical price changes and comparing them to the current price change (in percentage). The indicator maintains separate arrays for green (bullish) and red (bearish) price movements and their corresponding counts.
When a new bar is formed, the indicator checks whether the price movement (in percentage) is already present in the respective array. If it is, the corresponding count is updated. Otherwise, a new entry is added to the array, with an initial count of 1.
Once the historical data has been analyzed, the BMPPL indicator calculates the probability of each price movement by dividing the count of each movement by the sum of all counts. These probabilities are then stored in separate arrays for green and red movements.
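A simplified Pine Script v5 sketch of that counting logic for the bullish side (names are hypothetical, and moves are rounded to two decimals so that equal percentage moves actually match):

var array<float> greenMoves  = array.new<float>()  // distinct % moves seen on green bars
var array<int>   greenCounts = array.new<int>()    // occurrence count per move
pctMove = math.round(((close - open) / open) * 100, 2)
if barstate.isconfirmed and close > open
    int idx = array.indexof(greenMoves, pctMove)
    if idx >= 0
        array.set(greenCounts, idx, array.get(greenCounts, idx) + 1)
    else
        array.push(greenMoves, pctMove)
        array.push(greenCounts, 1)
// probability of the move at index idx = its count / sum of all counts
probOf(int idx) =>
    1.0 * array.get(greenCounts, idx) / math.max(array.sum(greenCounts), 1)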
Utilizing BMPPL Indicator Settings Effectively
To make the most of the BMPPL indicator, it's essential to understand how to use the Max Number of Elements and Step Size settings effectively:
Identify your trading objectives: Before adjusting the settings, it's crucial to know what you want to achieve with your trades. Are you targeting specific price levels or focusing on high-probability moves? Identifying your objectives will help you determine the appropriate settings.
Start with the default settings: The default settings provide a broad overview of price movement probabilities. Start by analyzing these settings to gain a general understanding of the market behavior.
Adjust the settings according to your objectives: Once you have a clear understanding of your trading objectives, adjust the Max Number of Elements and Step Size settings accordingly. For example, if you want to focus on the highest probability move, set both settings to 1.
Experiment and refine: As you gain experience with the BMPPL indicator, continue to experiment with different combinations of Max Number of Elements and Step Size settings. This will help you find the optimal configuration that aligns with your trading strategy and risk tolerance. Remember to continually evaluate your trading results and refine your settings as needed.
Combine with other technical analysis tools: While the BMPPL indicator provides valuable insights on its own, combining it with other technical analysis tools can further enhance your trading strategy. Use additional indicators and chart patterns to confirm your analysis and improve the accuracy of your trades.
Monitor and adjust: Market conditions are constantly changing, and it's crucial to stay adaptive. Keep monitoring the market and adjust your BMPPL settings as necessary to ensure they remain relevant and effective in the current market environment.
By understanding and effectively utilizing the Max Number of Elements and Step Size settings in the BMPPL indicator, you can gain a deeper insight into the potential direction of the market, allowing you to make more informed trading decisions. Experimenting with different settings and combining the BMPPL indicator with other technical analysis tools will ultimately help you develop a robust trading strategy that maximizes your potential profits.
How Can the BMPPL Indicator Benefit Your Trading?
The primary benefit of the BMPPL indicator is its ability to provide you with a statistical edge in your trading by displaying probabilities for various price movements. By analyzing historical price data, the indicator helps you understand the likelihood of certain price movements occurring, allowing you to make more informed decisions about your trades.
The customizable nature of the BMPPL indicator makes it a valuable tool for traders with specific price targets or risk management strategies in mind. By understanding the probability of reaching your target price or the likelihood of encountering a significant price movement, you can better manage your risk and optimize your trading strategy.
Additionally, the BMPPL indicator can be used in conjunction with other technical analysis tools and indicators to further strengthen your trading strategy. For example, you can combine the BMPPL indicator with support and resistance levels, trend lines, and moving averages to better time your entries and exits.
Wrapping Up
In conclusion, the Bar Move Probability Price Levels (BMPPL) indicator is a powerful and customizable tool that can help you gain a statistical edge in your trading. By analyzing historical price data and displaying probabilities for various price movements, the BMPPL indicator allows you to make more informed decisions about your trades, ultimately leading to more successful outcomes.
The customizable settings of the BMPPL indicator make it an adaptable tool for traders with diverse trading styles and risk management preferences. With its ability to provide valuable insights into the potential direction of the market, the BMPPL indicator is an essential addition to any trader's toolbox.
Moreover, when combined with other technical analysis tools and indicators, the BMPPL indicator can further enhance your trading strategy, allowing you to better time your entries and exits and maximize your potential profits. So, if you're looking to gain an edge in your trading and improve your decision-making process, the Bar Move Probability Price Levels (BMPPL) indicator is definitely worth exploring.
Adaptive Candlestick Pattern Recognition System
█ INTRODUCTION
Nearly three years in the making, intermittently worked on in the few spare hours of weekends and time off, this is a passion project I undertook to flesh out my skills as a computer programmer. This script currently recognizes 85 different candlestick patterns ranging from one to five candles in length. It also performs statistical analysis on those patterns to determine prior performance and changes the coloration of those patterns based on that performance. In searching TradingView's script library for scripts similar to this one, I had found a handful. However, when I reviewed the ones which were open source, I did not see many that truly captured the power of Pine Script or leveraged the way it works to create efficient and reliable code; this was one of the main driving factors for releasing this 5,000+ line behemoth open source.
Please take the time to review this description and source code to utilize this script to its fullest potential.
█ CONCEPTS
This script covers the following topics: Candlestick Theory, Trend Direction, Higher Timeframes, Price Analysis, Statistic Analysis, and Code Design.
Candlestick Theory - This script focuses solely on the concept of Candlestick Theory: arrangements of candlesticks may form certain patterns that can potentially influence the future price action of assets which experience those patterns. A full list of patterns (grouped by pattern length) will be in its own section of this description. This script contains two modes of operation for identifying candlestick patterns, 'CLASSIC' and 'BREAKOUT'.
CLASSIC: In this mode, candlestick patterns will be identified whenever they appear. The user has a wide variety of inputs to manipulate that can change how certain patterns are identified and even enable alerts to notify themselves when these patterns appear. Each pattern selected to appear will have their Profit or Loss (P/L) calculated starting from the first candle open succeeding the pattern to a candle close specified some number of candles ahead. These P/L calculations are then collected for each pattern, and split among partitions of prior price action of the asset the script is currently applied to (more on that in Higher Timeframes ).
BREAKOUT: In this mode, P/L calculations are held off until a breakout direction has been confirmed. The user may specify the number of candles ahead of a pattern's appearance (from one to five) that a pattern has to confirm a breakout in either an upward or downward direction. A breakout is constituted when there is a candle following the appearance of the pattern that closes above/at the highest high of the pattern, or below/at its lowest low. Only then will percent return calculations be performed for the pattern that's been identified, and these percent returns are broken up not only by the partition they had appeared in but also by the breakout direction itself. Patterns which do not breakout in either direction will be ignored, along with having their labels deleted.
In both of these modes, patterns may be overridden. Overrides occur when a smaller pattern has been detected and ends up becoming one (or more) of the candles of a larger pattern. A key example of this would be the Bearish Engulfing and the Three Outside Down patterns. A Three Outside Down necessitates a Bearish Engulfing as the first two candles in it, while the third candle closes lower. When a pattern is overridden, the return for that pattern will no longer be tracked. Overrides will not occur if the tail end of a larger pattern occurs at the beginning of a smaller pattern (Ex: a Bullish Engulfing occurs on the third candle of a Three Outside Down and the candle immediately following that pattern, the Three Outside Down pattern will not be overridden).
Important Functionality Note: These patterns are only searched for at the most recently closed candle, not on the currently closing candle, which creates an offset of one for this script's execution. (SEE LIMITATIONS)
Trend Direction - Many of the patterns require a trend direction prior to their appearance. Noting TradingView's own publication of candlestick patterns, I utilize a similar method for determining trend direction. Moving Averages are used to determine which trend is currently taking place for candlestick patterns to be sought out. The user has access to two Moving Averages which they may individually modify the following for each: Moving Average type (list of 9), their length, width, source values, and all variables associated with two special Moving Averages (Least Squares and Arnaud Legoux).
There are 3 settings for these Moving Averages, the first two switch between the two Moving Averages, and the third uses both. When using individual Moving Averages, the user may select a 'price point' to compare against the Moving Average (default is close). This price point is compared to the Moving Average at the candles prior to the appearance of candle patterns. Meaning: The close compared to the Moving Average two candles behind determines the trend direction used for Candlestick Analysis of one candle patterns; three candles behind for two candle patterns and so on. If the selected price point is above the Moving Average, then the current trend is an 'uptrend', 'downtrend' otherwise.
The third setting using both Moving Averages will compare the lengths of each, and trend direction is determined by the shorter Moving Average compared to the longer one. If the shorter Moving Average is above the longer, then the current trend is an 'uptrend', 'downtrend' otherwise. If the lengths of the Moving Averages are the same, or both Moving Averages are Symmetrical, then MA1 will be used by default. (SEE LIMITATIONS)
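A condensed sketch of that comparison for a single Moving Average, with a hypothetical `patternLen` parameter for the pattern being checked (the script's real version supports nine MA types, configurable price points, and the dual-MA mode):

maLen = input.int(50, "MA 1 Length")
ma1   = ta.sma(close, maLen)
// trend at the candles prior to the pattern: offset = pattern length + 1
isUptrend(int patternLen) =>
    close[patternLen + 1] > ma1[patternLen + 1]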
Higher Timeframes - This script employs the use of Higher Timeframes with a few request.security calls. The purpose of these calls is strictly for the partitioning of an asset's chart, splitting the returns of patterns into three separate groups. The four inputs in control of this partitioning split the chart based on: A given resolution to grab values from, the length of time in that resolution, and 'Upper' and 'Lower Limits' which split the trading range provided by that length of time in that resolution that forms three separate groups. The default values for these four inputs will partition the current chart by the yearly high-low range where: the 'Upper' partition is the top 20% of that trading range, the 'Middle' partition is 80% to 33% of the trading range, and the 'Lower' partition covers the trading range within 33% of the yearly low.
Patterns which are identified by this script will have their returns grouped together based on which partition they had appeared in. For example, a Bullish Engulfing which occurs within a third of the yearly low will have its return placed separately from a Bullish Engulfing that occurred within 20% of the yearly high. The idea is that certain patterns may perform better or worse depending on when they had occurred during an asset's trading range.
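A sketch of the partitioning idea using the default values described above (yearly resolution, top 20% and bottom 33% limits); repainting safeguards and the script's exact request are omitted for brevity:

yrHigh  = request.security(syminfo.tickerid, "12M", high)
yrLow   = request.security(syminfo.tickerid, "12M", low)
yrRange = yrHigh - yrLow
partition = close >= yrHigh - yrRange * 0.20 ? "Upper" : close <= yrLow + yrRange * 0.33 ? "Lower" : "Middle"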
Price Analysis - Price Analysis is a major part of this script's functionality as it can fundamentally change how patterns are shown to the user. The settings related to Price Analysis include setting the number of candles ahead of a pattern's appearance to determine the return of that pattern. In 'BREAKOUT' mode, an additional setting allows the user to specify where the P/L calculation will begin for a pattern that had appeared and confirmed. (SEE LIMITATIONS)
The calculation for percent returns of patterns is illustrated with the following pseudo-code (CLASSIC mode, this is a simplified version of the actual code):
type patternObj
    int ID
    int partition

type returnsArray
    array<float> returns

// No pattern found = na returned
patternObj TEST_VAL = f_FindPattern()
// evaluate the pattern found on the prior bar (the script's offset of one)
patternObj priorTestVal = TEST_VAL[1]
if not na(priorTestVal)
    pnlMatrixRow = priorTestVal.ID
    pnlMatrixCol = priorTestVal.partition
    matrixReturn = matrix.get(PERCENT_RETURNS, pnlMatrixRow, pnlMatrixCol)
    percentReturn = ((close - open) / open) * 100  // in percent
    array.push(matrixReturn.returns, percentReturn)
Statistic Analysis - This script uses Pine's built-in array functions to conduct the Statistic Analysis for patterns. When a pattern is found and its P/L calculation is complete, its return is added to a 'Return Array' User-Defined-Type that contains numerous fields which retain information on a pattern's prior performance. The actual UDT is as follows:
type returnArray
    array<float> returns = na
    int size = 0
    float avg = 0
    float median = 0
    float stdDev = 0
    array<int> polarities = na
All values within this UDT will be updated when a return is added to it (some based on user input). The array.avg, array.median and array.stdev functions will be run and their results saved into the respective fields after a return is placed in the 'returns' array. The 'polarities' integer array is what changes based on user input. The user specifies two different percentages that define 'Positive' and 'Negative' returns for patterns. When a pattern returns above, below, or in between these two values, different indices of this array will be incremented to reflect the kind of return that pattern had just experienced.
These values (plus the full name, partition the pattern occurred in, and a 95% confidence interval of expected returns) will be displayed to the user on the tooltip of the labels that identify patterns. Simply scroll over the pattern label to view each of these values.
Code Design - Overall this script is as much of an art piece as it is functional. Its design features numerous depictions of ASCII Art that illustrate what is being attempted by the functions that identify patterns, and an incalculable amount of time was spent rewriting portions of code to improve its efficiency. Admittedly, this final version is nearly 1,000 lines shorter than a previous version (one which took nearly 30 seconds after compilation to run, and didn't do nearly half of what this version does). The use of UDTs, especially the 'patternObj' one crafted and redesigned from the Hikkake Hunter 2.0 I published last month, played a significant role in making this script run efficiently. There is a slight rigidity in some of this code mainly around pattern IDs which are responsible for displaying the abbreviation for patterns (as well as the full names under the tooltips, and the matrix row position for holding returns), as each is hard-coded to correspond to that pattern.
However, one thing I would like to mention is the extensive use of global variables for pattern detection. Many scripts I had looked over for ideas on how to identify candlestick patterns had the same idea; break the pattern into a set of logical 'true/false' statements derived from historically referencing candle OHLC values. Some scripts which identified upwards of 20 to 30 patterns would reference Pine's built-in OHLC values for each pattern individually, potentially requesting information from TradingView's servers numerous times that could easily be saved into a variable for re-use and only requested once per candle (what this script does).
█ FEATURES
This script features a massive amount of switches, options, floating point values, detection settings, and methods for identifying/tailoring pattern appearances. All modifiable inputs for patterns are grouped together based on the number of candles they contain. Other inputs (like those for statistics settings and coloration) are grouped separately and presented in a way I believe makes the most sense.
Not mentioned above is the coloration settings. One of the aims of this script was to make patterns visually signify their behavior to the user when they are identified. Each pattern has its own collection of returns which are analyzed and compared to the inputs of the user. The user may choose the colors for bullish, neutral, and bearish patterns. They may also choose the minimum number of patterns needed to occur before assigning a color to that pattern based on its behavior; a color for patterns that have not met this minimum number of occurrences yet, and a color for patterns that are still processing in BREAKOUT mode.
There are also an additional three settings which alter the color scheme for patterns: Statistic Point-of-Reference, Adaptive coloring, and Hard Limiting. The Statistic Point-of-Reference decides which value (average or median) will be compared against the 'Negative' and 'Positive Return Tolerance'(s) to guide the coloration of the patterns (or for Adaptive Coloring, the generation of a color gradient).
Adaptive Coloring will have this script produce a gradient that patterns will be colored along. The more bullish or bearish a pattern is, the further along the gradient those patterns will be colored starting from the 'Neutral' color (hard lined at the value of 0%: values above this will be colored bullish, bearish otherwise). When Adaptive Coloring is enabled, this script will request the highest and lowest values (these being the Statistic Point-of-Reference) from the matrix containing all returns and rewrite global variables tied to the negative and positive return tolerances. This means that all patterns identified will be compared with each other to determine bullish/bearishness in Adaptive Coloring.
Hard Limiting will prevent these global variables from being rewritten, so patterns whose Statistic Point-of-Reference exceed the return tolerances will be fully colored the bullish or bearish colors instead of a generated gradient color. (SEE LIMITATIONS)
Apart from the Candle Detection Modes (CLASSIC and BREAKOUT), there's an additional two inputs which modify how this script behaves grouped under a "MASTER DETECTION SETTINGS" tab. These two "Pattern Detection Settings" are 'SWITCHBOARD' and 'TARGET MODE'.
SWITCHBOARD: Every single pattern has a switch that is associated with its detection. When a switch is enabled, the code which searches for that pattern will be run. With the Pattern Detection Setting set to this, all patterns that have their switches enabled will be sought out and shown.
TARGET MODE: There is an additional setting which operates on top of 'SWITCHBOARD' that singles out an individual pattern the user specifies through a drop down list. The names of every pattern recognized by this script will be present along with an identifier that shows the number of candles in that pattern (Ex: " (# candles)"). All patterns enabled in the switchboard will still have their returns measured, but only the pattern selected from the "Target Pattern" list will be shown. (SEE LIMITATIONS)
The vast majority of other features are held in the one, two, and three candle pattern sections.
For one-candle patterns, there are:
3 — Settings related to defining 'Tall' candles:
The number of candles to sample for previous candle-size averages.
The type of comparison done for 'Tall' Candles: Settings are 'RANGE' and 'BODY'.
The 'Tolerance' for tall candles, specifying what percent of the 'average' size candles must exceed to be considered 'Tall'.
When the 'Tall Candle Setting' is set to RANGE, the high-low ranges are what the current candle range will be compared against to determine if a candle is 'Tall'. Otherwise the candle bodies (absolute value of the close - open) will be compared instead; a sketch of this test follows at the end of this one-candle section. (SEE LIMITATIONS)
Hammer Tolerance - How large a 'discarded wick' may be before it disqualifies a candle from being a 'Hammer'.
Discarded wicks are compared to the size of the Hammer's candle body and are dependent upon the body's center position. Hammer bodies closer to the high of the candle will have the upper wick used as its 'discarded wick', otherwise the lower wick is used.
9 — Doji Settings, some pulled from an old Doji Hunter I made a while back:
Doji Tolerance - How large the body of a candle may be compared to the range to be considered a 'Doji'.
Ignore N/S Dojis - Turns off Trend Direction for non-special Dojis.
GS/DF Doji Settings - 2 Inputs that enable and specify how large wicks that typically disqualify Dojis from being 'Gravestone' or 'Dragonfly' Dojis may be.
4 Settings related to 'Long Wick Doji' candles detailed below.
A Tolerance for 'Rickshaw Man' Dojis specifying how close the center of the body must be to the range to be valid.
The 4 settings the user may modify for 'Long Legged' Dojis are: A Sample Base for determining the previous average of wicks, a Sample Length specifying how far back to look for these averages, a Behavior Setting to define how 'Long Legged' Dojis are recognized, and a tolerance to specify how large in comparison to the prior wicks a Doji's wicks must be to be considered 'Long Legged'.
The 'Sample Base' list has two settings:
RANGE: The wicks of prior candles are compared to their candle ranges and the 'wick averages' will be what the average percent of ranges were in the sample.
WICKS: The size of the wicks themselves are averaged and returned for comparing against the current wicks of a Doji.
The 'Behavior' list has three settings:
ONE: Only one wick length needs to exceed the average by the tolerance for a Doji to be considered 'Long Legged'.
BOTH: Both wick lengths need to exceed the average of the tolerance of their respective wicks (upper wicks are compared to upper wicks, lower wicks compared to lower) to be considered 'Long Legged'.
AVG: Both wicks and the averages of the previous wicks are added together, divided by two, and compared. If the 'average' of the current wicks exceeds this combined average of prior wicks by the tolerance, then this would constitute a valid 'Long Legged' Doji. (For Dojis in general - SEE LIMITATIONS)
The final input is one related to candle patterns which require a Marubozu candle in them. The two settings for this input are 'INCLUSIVE' and 'EXCLUSIVE'. If INCLUSIVE is selected, any opening/closing variant of Marubozu candles will be allowed in the patterns that require them.
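As referenced in the 'Tall' candle settings above, a sketch of that test might look like this in Pine Script v5 (hypothetical input names; the tolerance here is a multiplier, e.g. 1.25 = 25% above the average):

sampleLen   = input.int(14, "Tall Candle Sample Length")
tolerance   = input.float(1.25, "Tall Tolerance")
tallSetting = input.string("RANGE", "Tall Candle Setting", options = ["RANGE", "BODY"])
avgRange = ta.sma(high - low, sampleLen)
avgBody  = ta.sma(math.abs(close - open), sampleLen)
isTall   = tallSetting == "RANGE" ? (high - low) > avgRange * tolerance : math.abs(close - open) > avgBody * tolerance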
For two-candle patterns, there are:
2 — Settings which define 'Engulfing' parameters:
Engulfing Setting - Two options, RANGE or BODY which sets up how one candle may 'engulf' the previous.
Inclusive Engulfing - Boolean which enables if 'engulfing' candles can be equal to the values needed to 'engulf' the prior candle.
For the 'Engulfing Setting':
RANGE: If the second candle's high-low range completely covers the high-low range of the prior candle, this is recognized as 'engulfing'.
BODY: If the second candle's open-close completely covers the open-close of the previous candle, this is recognized as 'engulfing'; a sketch of both definitions follows at the end of this two-candle section. (SEE LIMITATIONS)
4 — Booleans specifying different settings for a few patterns:
One which allows for 'opens within body' patterns to let the second candle's open/close values match the prior candles' open/close.
One which forces 'Kicking' patterns to have a gap if the Marubozu setting is set to 'INCLUSIVE'.
And Two which dictate if the individual candles in 'Stomach' patterns need to be 'Tall'.
8 — Floating point values which affect 11 different patterns:
One which determines the distance the close of the first candle in a 'Hammer Inverted' pattern must be to the low to be considered valid.
One which affects how close the opens/closes need to be for all 'Lines' patterns (Bull/Bear Meeting/Separating Lines).
One that allows some leeway with the 'Matching Low' pattern (gives a small range the second candle close may be within instead of needing to match the previous close).
Three tolerances for On Neck/In Neck patterns (2 and 1 respectively).
A tolerance for the Thrusting pattern which give a range the close the second candle may be between the midpoint and close of the first to be considered 'valid'.
A tolerance for the two Tweezers patterns that specifies how close the highs and lows of the patterns need to be to each other to be 'valid'.
The first On Neck tolerance specifies how large the lower wick of the first candle may be (as a % of that candle's range) before the pattern is invalidated. The second tolerance specifies how far up the lower wick to the close the second candle's close may be for this pattern. The third tolerance for the In Neck pattern determines how far into the body of the first candle the second may close to be 'valid'.
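As referenced above, a sketch of the two 'Engulfing' definitions, with `inclusive` as a hypothetical input.bool allowing equality:

inclusive = input.bool(true, "Inclusive Engulfing")
prevTop = math.max(open[1], close[1])
prevBot = math.min(open[1], close[1])
currTop = math.max(open, close)
currBot = math.min(open, close)
rangeEngulf = inclusive ? (high >= high[1] and low <= low[1]) : (high > high[1] and low < low[1])
bodyEngulf  = inclusive ? (currTop >= prevTop and currBot <= prevBot) : (currTop > prevTop and currBot < prevBot)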
For the remaining patterns (3, 4, and 5 candles), there are:
3 — Settings for the Deliberation pattern:
A boolean which forces the open of the third candle to gap above the close of the second.
A tolerance which changes the proximity of the third candle's open to the second candle's close in this pattern.
A tolerance that sets the maximum size the third candle may be compared to the average of the first two candles.
One boolean value for the Two Crows patterns (standard and Upside Gapping) that forces the first two candles in the patterns to completely gap if disabled (candle 1's close < candle 2's low).
10 — Floating point values for the remaining patterns:
One tolerance for defining how much the size of each candle in the Identical Black Crows pattern may deviate from the average of themselves to be considered valid.
One tolerance for setting how close the opens/closes of certain three candle patterns may be to each other's opens/closes.*
Three floating point values that affect the Three Stars in the South pattern.
One tolerance for the Side-by-Side patterns - looks at the second and third candle closes.
One tolerance for the Stick Sandwich pattern - looks at the first and third candle closes.
A floating value that sizes the Concealing Baby Swallow pattern's 3rd candle wick.
Two values for the Ladder Bottom pattern which define a range that the third candle's wick size may be.
* This affects the Three Black Crows (non-identical) and Three White Soldiers patterns, each require the opens and closes of every candle to be near each other.
The first tolerance of the Three Stars in the South pattern affects the first candle body's center position, and defines where it must be above to be considered valid. The second tolerance specifies how close the second candle must be to this same position, as well as the deviation the ratio the candle body to its range may be in comparison to the first candle. The third restricts how large the second candle range may be in comparison to the first (prevents this pattern from being recognized if the second candle is similar to the first but larger).
The last two floating point values define upper and lower limits to the wick size of a Ladder Bottom's fourth candle to be considered valid.
█ HOW TO USE
While there are many moving parts to this script, I attempted to set the default values with what I believed may help identify the most patterns within reasonable definitions. When this script is applied to a chart, the Candle Detection Mode (along with the BREAKOUT settings) and all candle switches must be confirmed before patterns are displayed. All switches are on by default, so this gives the user an opportunity to pick which patterns to identify first before playing around in the settings.
All of the settings/inputs described above are meant for experimentation. I encourage the user to tweak these values at will to find which set ups work best for whichever charts they decide to apply these patterns to.
Refer to the patterns themselves during experimentation. The statistic information provided on the tooltips of the patterns are meant to help guide input decisions. The breadth of candlestick theory is deep, and this was an attempt at capturing what I could in its sea of information.
█ LIMITATIONS
DISCLAIMER: While it may seem a bit paradoxical that this script aims to use past performance to potentially measure future results, past performance is not indicative of future results. Markets are highly adaptive and often unpredictable. This script is meant as an informational tool to show how patterns may behave. There is no guarantee that confidence intervals (or any other metric measured with this script) are accurate to the performance of patterns; caution must be exercised with all patterns identified regardless of how much information regarding prior performance is available.
Candlestick Theory - In the name, Candlestick Theory is a theory, and all theories come with their own limits. Some patterns identified by this script may be completely useless/unprofitable/unpredictable regardless of whatever combination of settings are used to identify them. However, if I truly believed this theory had no merit, this script would not exist. It is important to understand that this is a tool meant to be utilized with an array of others to procure positive (or negative, looking at you, short sellers) results when navigating the complex world of finance.
To address the functionality note, however: this script has an offset of 1 by default. Patterns will not be identified on the currently closing candle, only on the candle which has most recently closed. Attempting to have this script do both (offset by one or identify on close) led to more trouble than it was worth. I personally just want users to be aware that patterns will not be identified immediately when they appear.
Trend Direction - Moving Averages - There is a small quirk with how MA settings will be adjusted if the user inputs two moving averages of the same length when the "MA Setting" is set to 'BOTH'. If Moving Averages have the same length, this script will default to only using MA 1 regardless of if the types of Moving Averages are different . I will experiment in the future to alleviate/reduce this restriction.
Price Analysis - BREAKOUT mode - With how identifying patterns with a look-ahead confirmation works, the percent returns for patterns that break out in either direction will be calculated on the same candle regardless of if P/L Offset is set to 'FROM CONFIRMATION' or 'FROM APPEARANCE'. This same issue is present in the Hikkake Hunter script mentioned earlier. This does not mean the P/L calculations are incorrect , the offset for the calculation is set by the number of candles required to confirm the pattern if 'FROM APPEARANCE' is selected. It just means that these two different P/L calculations will complete at the same time independent of the setting that's been selected.
Adaptive Coloring/Hard Limiting - Hard Limiting is only used with Adaptive Coloring and has no effect outside of it. If Hard Limiting is used, it is recommended to increase the 'Positive' and 'Negative' return tolerance values as a pattern's bullish/bearishness may be disproportionately represented with the gradient generated under a hard limit.
TARGET MODE - This mode will break rules regarding patterns that are overridden on purpose. If a pattern selected in TARGET mode would have otherwise been absorbed by a larger pattern, it will have that pattern's percent return calculated; potentially leading to duplicate returns being included in the matrix of all returns recognized by this script.
'Tall' Candle Setting - This is a wide-reaching setting, as approximately 30 different patterns or so rely on defining 'Tall' candles. Changing how 'Tall' candles are defined whether by the tolerance value those candles need to exceed or by the values of the candle used for the baseline comparison (RANGE/BODY) can wildly affect how this script functions under certain conditions. Refer to the tooltip of these settings for more information on which specific patterns are affected by this.
Doji Settings - There are roughly 10 or so two to three candle patterns which have Dojis as a part of them. If all Dojis are disabled, it will prevent some of these larger patterns from being recognized. This is a dependency issue that I may address in the future.
'Engulfing' Setting - Functionally, the two 'Engulfing' settings are quite different. Because of this, the 'RANGE' setting may cause certain patterns that would otherwise be valid under textbook and online references/definitions to not be recognized as such (like the Upside Gap Two Crows or Three Outside down).
█ PATTERN LIST
This script recognizes 85 patterns upon initial release. I am open to adding additional patterns to it in the future and any comments/suggestions are appreciated. It recognizes:
15 — 1 Candle Patterns
4 Hammer type patterns: Regular Hammer, Takuri Line, Shooting Star, and Hanging Man
9 Doji Candles: Regular Dojis, Northern/Southern Dojis, Gravestone/Dragonfly Dojis, Gapping Up/Down Dojis, and Long-Legged/Rickshaw Man Dojis
White/Black Long Days
32 — 2 Candle Patterns
4 Engulfing type patterns: Bullish/Bearish Engulfing and Last Engulfing Top/Bottom
Dark Cloud Cover
Bullish/Bearish Doji Star patterns
Hammer Inverted
Bullish/Bearish Haramis + Cross variants
Homing Pigeon
Bullish/Bearish Kicking
4 Lines type patterns: Bullish/Bearish Meeting/Separating Lines
Matching Low
On/In Neck patterns
Piercing pattern
Shooting Star (2 Lines)
Above/Below Stomach patterns
Thrusting
Tweezers Top/Bottom patterns
Two Black Gapping
Rising/Falling Window patterns
29 — 3 Candle Patterns
Bullish/Bearish Abandoned Baby patterns
Advance Block
Collapsing Doji Star
Deliberation
Upside/Downside Gap Three Methods patterns
Three Inside/Outside Up/Down patterns (4 total)
Bullish/Bearish Side-by-Side patterns
Morning/Evening Star patterns + Doji variants
Stick Sandwich
Downside/Upside Tasuki Gap patterns
Three Black Crows + Identical variation
Three White Soldiers
Three Stars in the South
Bullish/Bearish Tri-Star patterns
Two Crows + Upside Gap variant
Unique Three River Bottom
3 — 4 Candle Patterns
Concealing Baby Swallow
Bullish/Bearish Three Line Strike patterns
6 — 5 Candle Patterns
Bullish/Bearish Breakaway patterns
Ladder Bottom
Mat Hold
Rising/Falling Three Methods patterns
█ WORKS CITED
Because of the amount of time needed to complete this script, I am unable to provide exact dates for when some of these references were used. I will also not provide every single reference, as citing a reference for each individual pattern and the place it was reviewed would lead to a bibliography larger than this script and its description combined. There were five major resources I used when building this script: one book, two websites (for various different reasons including patterns, moving averages, and various other articles of information), various scripts from TradingView's public library (including TradingView's own source code for *all* candle patterns), and Pine Script's reference manual.
Bulkowski, Thomas N. Encyclopedia of Candlestick Patterns . Hoboken, New Jersey: John Wiley & Sons Inc., 2008. E-book (google books).
Various. Numerous webpages. CandleScanner . 2023. online. Accessed 2020 - 2023.
Various. Numerous webpages. Investopedia . 2023. online. Accessed 2020 - 2023.
█ ACKNOWLEDGEMENTS
I want to take the time here to thank all of my friends and family, both online and in real life, for the support they've given me over the last few years in this endeavor. My pets who tried their hardest to keep me from completing it. And work for the grit to continue pushing through until this script's completion.
This belongs to me just as much as it does anyone else. Whether you are an institutional trader, gold bug hedging against the dollar, retail ape who got in on a squeeze, or just parents trying to grow their retirement/save for the kids. This belongs to everyone.
Private Beta for new features to be tested can be found here .
Vires In Numeris
Concept Probability Cone
The Concept Probability Cone is a mathematical indicator designed to demonstrate the potential price range of an asset based on its historical volatility and statistical probabilities. Unlike most publicly available probability cone scripts, which often contain inaccuracies and oversimplifications, this tool is developed with a strong focus on precision and accuracy. It is important to note, however, that the Concept Probability Cone is currently in its initial stage, and further improvements and refinements may be introduced over time.
One significant difference between the Concept Probability Cone and other publicly available scripts is the incorporation of inverse Cumulative Distribution Functions (CDFs) in its calculations. Inverse CDFs are used to map a random variable's probability distribution to its corresponding quantile, which helps in determining the asset's price boundaries with a higher level of precision. This key feature sets the Concept Probability Cone apart from other tools, addressing the flaws found in many existing probability cone scripts.
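As an illustration of the underlying math only, not the script's code: under a lognormal model, the cone's bounds n bars ahead come from the volatility of log returns scaled by the square root of time, with the quantile z supplied by an inverse CDF. The Pine Script v5 sketch below hard-codes z = 1.96 (roughly the two-sided 95% level) instead of computing an inverse CDF, and assumes zero drift:

len    = input.int(30, "Volatility lookback")
nAhead = input.int(10, "Bars ahead")
logRet = math.log(close / close[1])
sigma  = ta.stdev(logRet, len)
upper  = close * math.exp( 1.96 * sigma * math.sqrt(nAhead))  // upper cone bound
lower  = close * math.exp(-1.96 * sigma * math.sqrt(nAhead))  // lower cone bound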
This is a proof of concept indicator. Users are encouraged to play around with the tool, explore its features, and gain a deeper understanding of the statistical principles it demonstrates.
Swing Counter [theEccentricTrader]
█ OVERVIEW
This indicator counts the number of confirmed swing high and swing low scenarios on any given candlestick chart and displays the statistics in a table, which can be repositioned and resized at the user's discretion.
█ CONCEPTS
Green and Red Candles
• A green candle is one whose close price is equal to or above its open price.
• A red candle is one whose close price is below its open price.
Swing Highs and Swing Lows
• A swing high is a green candle or series of consecutive green candles followed by a single red candle to complete the swing and form the peak.
• A swing low is a red candle or series of consecutive red candles followed by a single green candle to complete the swing and form the trough.
Peak and Trough Prices (Basic)
• The peak price of a complete swing high is the high price of either the red candle that completes the swing high or the high price of the preceding green candle, depending on which is higher.
• The trough price of a complete swing low is the low price of either the green candle that completes the swing low or the low price of the preceding red candle, depending on which is lower.
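A compact Pine Script v5 sketch of the definitions above, using the basic peak and trough prices (variable names are illustrative):

green = close >= open
red   = close < open
swingHigh   = red and green[1]                       // a red candle completes the swing high
peakPrice   = swingHigh ? math.max(high, high[1]) : na
swingLow    = green and red[1]                       // a green candle completes the swing low
troughPrice = swingLow ? math.min(low, low[1]) : na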
Peak and Trough Prices (Advanced)
• The advanced peak price of a complete swing high is the high price of either the red candle that completes the swing high or the high price of the highest preceding green candle high price, depending on which is higher.
• The advanced trough price of a complete swing low is the low price of either the green candle that completes the swing low or the low price of the lowest preceding red candle low price, depending on which is lower.
Green and Red Peaks and Troughs
• A green peak is one that derives its price from the green candle/s that constitute the swing high.
• A red peak is one that derives its price from the red candle that completes the swing high.
• A green trough is one that derives its price from the green candle that completes the swing low.
• A red trough is one that derives its price from the red candle/s that constitute the swing low.
Historic Peaks and Troughs
The current, or most recent, peak and trough occurrences are referred to as occurrence zero. Previous peak and trough occurrences are referred to as historic and ordered numerically from right to left, with the most recent historic peak and trough occurrences being occurrence one.
Upper Trends
• A return line uptrend is formed when the current peak price is higher than the preceding peak price.
• A downtrend is formed when the current peak price is lower than the preceding peak price.
• A double-top is formed when the current peak price is equal to the preceding peak price.
Lower Trends
• An uptrend is formed when the current trough price is higher than the preceding trough price.
• A return line downtrend is formed when the current trough price is lower than the preceding trough price.
• A double-bottom is formed when the current trough price is equal to the preceding trough price.
█ FEATURES
Inputs
• Start Date
• End Date
• Position
• Text Size
• Show Sample Period
• Show Plots
• Show Lines
Table
The table is colour coded, consists of three columns and nine rows. Blue cells denote neutral scenarios, green cells denote return line uptrend and uptrend scenarios, and red cells denote downtrend and return line downtrend scenarios.
The swing scenarios are listed in the first column with their corresponding total counts to the right, in the second column. The last row in column one, row nine, displays the sample period which can be adjusted or hidden via indicator settings.
Rows three and four in the third column of the table display the total higher peaks and higher troughs as percentages of total peaks and troughs, respectively. Rows five and six in the third column display the total lower peaks and lower troughs as percentages of total peaks and troughs, respectively. And rows seven and eight display the total double-top peaks and double-bottom troughs as percentages of total peaks and troughs, respectively.
Plots
I have added plots as a visual aid to the swing scenarios listed in the table. Green up-arrows with ‘HP’ denote higher peaks, while green up-arrows with ‘HT’ denote higher troughs. Red down-arrows with ‘LP’ denote lower peaks, while red down-arrows with ‘LT’ denote lower troughs. Similarly, blue diamonds with ‘DT’ denote double-top peaks and blue diamonds with ‘DB’ denote double-bottom troughs. These plots can be hidden via indicator settings.
Lines
I have also added green and red trendlines as a further visual aid to the swing scenarios listed in the table. Green lines denote return line uptrends (higher peaks) and uptrends (higher troughs), while red lines denote downtrends (lower peaks) and return line downtrends (lower troughs). These lines can be hidden via indicator settings.
█ HOW TO USE
This indicator is intended for research purposes and strategy development. I hope it will be useful in helping to gain a better understanding of the underlying dynamics at play on any given market and timeframe. It can, for example, give you an idea of any inherent biases such as a greater proportion of higher peaks to lower peaks. Or a greater proportion of higher troughs to lower troughs. Such information can be very useful when conducting top down analysis across multiple timeframes, or considering entry and exit methods.
What I find most fascinating about this logic, is that the number of swing highs and swing lows will always find equilibrium on each new complete wave cycle. If for example the chart begins with a swing high and ends with a swing low there will be an equal number of swing highs to swing lows. If the chart starts with a swing high and ends with a swing high there will be a difference of one between the two total values until another swing low is formed to complete the wave cycle sequence that began at start of the chart. Almost as if it was a fundamental truth of price action, although quite common sensical in many respects. As they say, what goes up must come down.
The objective logic for swing highs and swing lows I hope will form somewhat of a foundational building block for traders, researchers and developers alike. Not only does it facilitate the objective study of swing highs and swing lows it also facilitates that of ranges, trends, double trends, multi-part trends and patterns. The logic can also be used for objective anchor points. Concepts I will introduce and develop further in future publications.
█ LIMITATIONS
Some higher timeframe candles on tickers with larger lookbacks, such as the DXY, do not actually contain all the open, high, low and close (OHLC) data at the beginning of the chart. Instead, they use the close price for open, high and low prices. So, while we can determine whether the close price is higher or lower than the preceding close price, there is no way of knowing what actually happened intra-bar for these candles. And by default, candles that close at the same price as they open will be counted as green. You can avoid this problem by utilising the sample period filter.
The green and red candle calculations are based solely on differences between open and close prices, as such I have made no attempt to account for green candles that gap lower and close below the close price of the preceding candle, or red candles that gap higher and close above the close price of the preceding candle. I can only recommend using 24-hour markets, if and where possible, as there are far fewer gaps and, generally, more data to work with. Alternatively, you can replace the scenarios with your own logic to account for the gap anomalies, if you are feeling up to the challenge.
The sample size will be limited by your TradingView subscription plan. Premium users get 20,000 candles' worth of data, pro+ and pro users get 10,000, and basic users get 5,000. If upgrading is currently not an option, you can always keep a rolling tally of the statistics in an Excel spreadsheet or something of the like.
█ NOTES
I feel it important to address the mention of advanced peak and trough price logic. While I have introduced the concept, I have not included the logic in my script for a number of reasons. The most pertinent of which being the amount of extra work I would have to do to include it in a public release versus the actual difference it would make to the statistics. Based on my experience, there are actually only a small number of cases where the advanced peak and trough prices are different from the basic peak and trough prices. And with adequate multi-timeframe analysis any high or low prices that are not captured using basic peak and trough price logic on any given time frame, will no doubt be captured on a higher timeframe. See the example below on the 1H FOREXCOM:USDJPY chart (Figure 1), where the basic peak price logic denoted by the indicator plot does not capture what would be the advanced peak price, but on the 2H FOREXCOM:USDJPY chart (Figure 2), the basic peak logic does capture the advanced peak price from the 1H timeframe.
Figure 1.
Figure 2.
█ RAMBLINGS
“Never was there an age that placed economic interests higher than does our own. Never was the need of a scientific foundation for economic affairs felt more generally or more acutely. And never was the ability of practical men to utilize the achievements of science, in all fields of human activity, greater than in our day. If practical men, therefore, rely wholly on their own experience, and disregard our science in its present state of development, it cannot be due to a lack of serious interest or ability on their part. Nor can their disregard be the result of a haughty rejection of the deeper insight a true science would give into the circumstances and relationships determining the outcome of their activity. The cause of such remarkable indifference must not be sought elsewhere than in the present state of our science itself, in the sterility of all past endeavours to find its empirical foundations.” (Menger, 1871, p.45).
█ BIBLIOGRAPHY
Menger, C. (1871) Principles of Economics. Reprint, Auburn, Alabama: Ludwig Von Mises Institute: 2007.
ValueView
Title: ValueView
Description:
ValueView is a script designed to cater to the needs of value investors. Its primary purpose is to provide a comprehensive overview of the financial performance of a stock, making it easier for investors to assess the intrinsic value and potential investment opportunities.
The script displays a concise summary of essential fundamental values and metrics in the form of a customizable table, directly integrated into the chart. This allows investors to evaluate the stock's performance for a variable number of fiscal years, as defined by the user. The input flexibility enables users to focus on the timeframes that are most relevant to their analysis.
ValueView works on timeframes greater than or equal to "DAY", ensuring that the data presented is reliable and relevant for long-term value investing strategies. With this feature, investors can focus on the bigger picture and avoid getting distracted by short-term fluctuations.
With ValueView, investors can choose to select or deselect specific metrics according to their investment strategy and preferences. This feature ensures that users are presented with the information they find most valuable, allowing them to make more informed decisions based on their unique perspective.
Key Features:
Quick overview of the financial performance of a stock for value investors
Customizable table displaying essential fundamental values and metrics
User-defined number of fiscal years for analysis
Select and deselect metrics to tailor the output to individual preferences
ValueView offers a convenient, time-saving solution for value investors looking to gain a deep understanding of a stock's financial performance. With its customizable features and easy-to-use interface, this script simplifies the process of identifying promising investments and making informed decisions.
Cuck Wick
Acknowledgement
This indicator is dedicated to my friend Alexandru who saved me from one of these scam cuck wicks which almost liquidated me.
Alexandru is one of the best scalpers out there and he always nails his entries at the tip of these wicks.
This inspired me to create this indicator.
What's a cuck wick?
It's that fast stop-hunting wick that cucks everyone by triggering their stop-loss and liquidation.
Liquidity is the lifeblood of the stock market, and liquidation is the process that moves price.
This indicator will identify when a liquidity pool is getting raided to trigger buy or sell stops, also known as stop-hunts.
How does it work?
When the market consolidates in one direction, it builds up liquidity zones.
Market makers will break out of these consolidation phases with dramatic price action, pumping or dumping to raid these liquidity zones.
These raids are also called stop-hunts or liquidity raids. After that, price will start reversing back in the opposite direction.
This is most noticeable by the length of the wick of a given candle in a very short amount of time and the total size of the candle.
This indicator highlights them accordingly.
Settings
The wick and candle ratios work with default values, but fine-tuning will enhance user experience and usability.
Wick Ratio: Size of the wick compared to the body of a candle.
Adjust this to a higher ratio on smaller timeframes or a smaller ratio on bigger timeframes, to suit your trading style and spot trend reversals.
Candle Ratio: The size of the candle, by default it is 0.75% of the current price.
For example, if BTC is at 20,000 then the size of the candle has to be minimum 150.
This can be fine-tuned to a bigger candle size on higher timeframes or a smaller one on shorter timeframes, depending on the trade type. A sketch of both ratio checks follows.
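The following is a minimal sketch of how the two ratio filters described above could be combined; input names, thresholds, and plot shapes are illustrative assumptions, not the published script's code:

```pine
//@version=5
indicator("Wick and candle ratio filter (sketch)", overlay=true)
wickRatio   = input.float(2.0,  "Wick Ratio")        // wick size relative to the body
candleRatio = input.float(0.75, "Candle Ratio (%)")  // minimum candle size as % of price
body      = math.abs(close - open)
upperWick = high - math.max(open, close)
lowerWick = math.min(open, close) - low
// e.g. BTC at 20,000 with 0.75% -> the candle range must be at least 150
bigEnough = (high - low) >= close * candleRatio / 100
bullWick  = bigEnough and lowerWick >= body * wickRatio  // long lower wick: buy-side stop raid
bearWick  = bigEnough and upperWick >= body * wickRatio  // long upper wick: sell-side stop raid
plotshape(bullWick, style=shape.triangleup,   location=location.belowbar, color=color.green)
plotshape(bearWick, style=shape.triangledown, location=location.abovebar, color=color.red)
```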
How to use it?
This indicator will identify when a liquidity pool is getting raided to trigger buy or sell stops, also known as stop-hunts. It can be used on its own for scalping, but there are also a good few indicators that can add confluence to bigger-timeframe trades.
Scalp
This indicator shows the most chaotic moments in price action; therefore it works best on smaller timeframes, ideally 3- or 5-minute candles.
- Wait for the market to start pumping or dumping.
- Current candle will change colour (Bullish/Bearish).
- Enter trade as soon as price starts to reverse back.
- Place the stop-loss outside of the current candle.
- Wait for the cuck wick to appear as confirmation.
Price is very chaotic during a liquidity stop-hunt raid but there is a saying:
"In the midst of chaos, there is also opportunity" - Sun-Tzu
Since this is a very high-risk, high-reward strategy, it is advised to practice on paper trades first.
Practice until perfect, and this indicator can become the perfect bread-and-butter scalp confirmation.
Fair Value Gap
FVG strategy is the most accurate in conjunction with this indicator.
Normally price would reverse after consuming fair value gaps but often it's difficult to know when and where.
This indicator would identify those crucial entry points for reverse course direction of the price action.
Support and Resistance
This indicator can also be used in conjunction with support and resistance lines.
Generally, the cuck wick will go deep below support or spike far above resistance lines to liquidate positions.
Bollinger Bands
The Bollinger Bands strategy would be to wait until the price breaks out of the band.
Once the wick is formed, it would be an ideal entry point.
Script change
This is an open-source script, so feel free to modify it according to your needs and to amplify your existing strategy.
Flat Market and Low ADX Indicator [CHE]
Why use the Flat Market and Low ADX Indicator?
Flat markets, where prices remain within a narrow range for an extended period, can be both critical and dangerous for traders. In a flat market, the price action becomes less predictable, and traders may struggle to find profitable trading opportunities. As a result, many traders may decide to take a break from the market until a clear trend emerges.
However, flat markets can also be dangerous for traders who continue to trade despite the lack of clear trends. In the absence of a clear direction, traders may be tempted to take larger risks or make impulsive trades in an attempt to capture small profits. Such behavior can quickly lead to significant losses, especially if the market suddenly breaks out of its flat range, causing traders to experience large drawdowns.
Therefore, it is essential to approach flat markets with caution and to have a clear trading plan that incorporates strategies for both trending and flat markets. Traders may also use technical indicators, such as the Flat Market and Low ADX Indicator, to help identify flat markets and determine when it is appropriate to enter or exit a position.
The confluence between flat markets and low ADX readings can further increase the risk of trading during these periods. The ADX (Average Directional Index) is a technical indicator used to measure the strength of a trend. A low ADX reading indicates that the market is in a consolidation phase, which can coincide with a flat market. When a flat market occurs during a period of low ADX, traders should be even more cautious, as there is little to no directional bias in the market. In this situation, traders may want to consider waiting for a clear trend to emerge or using range-bound trading strategies to avoid taking excessive risks.
Introduction:
Pine Script is a programming language used for developing custom technical analysis indicators and trading strategies in TradingView. This particular script is an indicator designed to identify flat markets and low ADX conditions. In this description, we will delve deeper into the functionality of this script and how it can be used to improve trading decisions.
Description:
The first input in the script is the length of the moving average used for calculating the center line. This moving average is used to define the high and low range of the market. The script then calculates the middle value of the range by taking the double exponential moving average (EMA) of the high, low, and close prices.
The script then determines whether the market is flat by comparing the middle value of the range with the high and low values. If the middle value is greater than the high value or less than the low value, the market is not flat. If the middle value is within the high and low range, the script considers the market to be flat. The script also uses RSI filter settings to further confirm if the market is flat or not. If the RSI value is between the RSI min and max values, then the market is considered flat. If the RSI value is outside this range, the market is not considered flat.
The script also calculates the ADX (Average Directional Index) to determine whether it's in a low area. ADX is a technical indicator used to measure the strength of a trend. The script uses the ADX filter settings to define the ADX threshold value. If the ADX value is below the threshold value, the script considers the market to be in a low ADX area.
The script provides various input options to customize the display settings, including the option to show the flat market and low ADX areas. Users can choose their preferred colors for the flat market and low ADX areas and adjust the transparency levels to suit their needs.
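To make the logic above concrete, here is a minimal sketch of the flat-market and low-ADX checks. The exact source, smoothing, and defaults of the published script may differ; the double EMA of hlc3 and the range comparison are assumptions based on the description:

```pine
//@version=5
indicator("Flat market and low ADX (sketch)", overlay=false)
maLen     = input.int(20,   "Center line MA length")
rsiMin    = input.float(40, "RSI min")
rsiMax    = input.float(60, "RSI max")
adxThresh = input.float(20, "ADX threshold")
// Middle of the range via a double EMA (assumed source/smoothing)
mid  = ta.ema(ta.ema(hlc3, maLen), maLen)
// Flat when the middle value stays inside the recent high/low range
flat = mid <= ta.highest(high, maLen) and mid >= ta.lowest(low, maLen)
// RSI filter: confirm only when RSI sits between the min and max bounds
rsiOk = ta.rsi(close, 14) > rsiMin and ta.rsi(close, 14) < rsiMax
// ADX filter: a low ADX marks a consolidation phase
[diPlus, diMinus, adx] = ta.dmi(14, 14)
plot(adx, "ADX")
bgcolor(flat and rsiOk ? color.new(color.orange, 85) : na)
bgcolor(adx < adxThresh ? color.new(color.blue, 90) : na)
```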
Conclusion:
In conclusion, this Pine Script indicator is designed to identify flat market and low ADX conditions, which can help traders make informed trading decisions. The script uses a range of inputs and calculations to determine the market direction, RSI filter, and ADX filter. By customizing the display settings, users can adjust the indicator to suit their preferences and improve their trading strategies. Overall, this script can be a valuable tool for traders looking to gain an edge in the markets.
Acknowledgments:
Thanks to the Pine Script™ v5 User Manual www.tradingview.com
Vector2
Library "Vector2"
Representation of two-dimensional vectors or points.
This structure is used to represent positions in two-dimensional space or vectors,
for example spatial coordinates in 2D space.
~~~
references:
docs.unity3d.com
gist.github.com
github.com
gist.github.com
gist.github.com
gist.github.com
~~~
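A quick usage sketch before the function reference - note that the import path below is a placeholder; substitute the library author's actual TradingView username and version number:

```pine
//@version=5
indicator("Vector2 usage (sketch)")
// "username" is a placeholder import path, not the real publication path.
import username/Vector2/1 as v2
a = v2.new(3.0, 1.5)
b = v2.from(2.0)
plot(v2.distance(a, b), "Distance a -> b")
```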
new(x, y)
Create a new Vector2 object.
Parameters:
x : float . The x value of the vector, default=0.
y : float . The y value of the vector, default=0.
Returns: Vector2. Vector2 object.
-> usage:
`unitx = Vector2.new(1.0) , plot(unitx.x)`
from(value)
Assigns value to a new vector `x,y` elements.
Parameters:
value : float, x and y value of the vector.
Returns: Vector2. Vector2 object.
-> usage:
`one = Vector2.from(1.0), plot(one.x)`
from(value, element_sep, open_par, close_par)
Assigns value to a new vector `x,y` elements.
Parameters:
value : string . The `x` and `y` value of the vector in a `x,y` or `(x,y)` format; spaces and parentheses will be removed automatically.
element_sep : string . Element separator character, default=`,`.
open_par : string . Opening parenthesis character, default=`(`.
close_par : string . Closing parenthesis character, default=`)`.
Returns: Vector2. Vector2 object.
-> usage:
`one = Vector2.from("1.0,2"), plot(one.x)`
copy(this)
Creates a deep copy of a vector.
Parameters:
this : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = Vector2.new(1.0) , b = a.copy() , plot(b.x)`
down()
Vector in the form `(0, -1)`.
Returns: Vector2. Vector2 object.
left()
Vector in the form `(-1, 0)`.
Returns: Vector2. Vector2 object.
right()
Vector in the form `(1, 0)`.
Returns: Vector2. Vector2 object.
up()
Vector in the form `(0, 1)`.
Returns: Vector2. Vector2 object.
one()
Vector in the form `(1, 1)`.
Returns: Vector2. Vector2 object.
zero()
Vector in the form `(0, 0)`.
Returns: Vector2. Vector2 object.
minus_one()
Vector in the form `(-1, -1)`.
Returns: Vector2. Vector2 object.
unit_x()
Vector in the form `(1, 0)`.
Returns: Vector2. Vector2 object.
unit_y()
Vector in the form `(0, 1)`.
Returns: Vector2. Vector2 object.
nan()
Vector in the form `(float(na), float(na))`.
Returns: Vector2. Vector2 object.
xy(this)
Return the values of `x` and `y` as a tuple.
Parameters:
this : Vector2 . Vector2 object.
Returns: [float, float]. Tuple of the `x` and `y` values.
-> usage:
`a = Vector2.new(1.0, 1.0) , [ax, ay] = a.xy() , plot(ax)`
length_squared(this)
Length of vector `a` in the form `a.x^2 + a.y^2`; for comparing vectors this is computationally lighter.
Parameters:
this : Vector2 . Vector2 object.
Returns: float. Squared length of vector.
-> usage:
`a = Vector2.new(1.0, 1.0) , plot(a.length_squared())`
length(this)
Magnitude of vector `a` in the form `sqrt(a.x^2 + a.y^2)`.
Parameters:
this : Vector2 . Vector2 object.
Returns: float. Length of vector.
-> usage:
`a = Vector2.new(1.0, 1.0) , plot(a.length())`
normalize(a)
Vector normalized with a magnitude of 1, in the form `a / length(a)`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = normalize(Vector2.new(3.0, 2.0)) , plot(a.y)`
isNA(this)
Checks if any of the components is `na`.
Parameters:
this : Vector2 . Vector2 object.
Returns: bool.
usage:
p = Vector2.new(1.0, na) , plot(isNA(p)?1:0)
add(a, b)
Adds vector `b` to `a`, in the form `(a.x + b.x, a.y + b.y)`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = one() , c = add(a, b) , plot(c.x)`
add(a, b)
Adds vector `b` to `a`, in the form `(a.x + b, a.y + b)`.
Parameters:
a : Vector2 . Vector2 object.
b : float . Value.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = 1.0 , c = add(a, b) , plot(c.x)`
add(a, b)
Adds vector `b` to `a`, in the form `(a + b.x, a + b.y)`.
Parameters:
a : float . Value.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = 1.0 , b = one() , c = add(a, b) , plot(c.x)`
subtract(a, b)
Subtract vector `b` from `a`, in the form `(a.x - b.x, a.y - b.y)`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = one() , c = subtract(a, b) , plot(c.x)`
subtract(a, b)
Subtract vector `b` from `a`, in the form `(a.x - b, a.y - b)`.
Parameters:
a : Vector2 . Vector2 object.
b : float . Value.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = 1.0 , c = subtract(a, b) , plot(c.x)`
subtract(a, b)
Subtract vector `b` from `a`, in the form `(a - b.x, a - b.y)`.
Parameters:
a : float . Value.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = 1.0 , b = one() , c = subtract(a, b) , plot(c.x)`
multiply(a, b)
Multiply vector `a` with `b`, in the form `(a.x * b.x, a.y * b.y)`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = one() , c = multiply(a, b) , plot(c.x)`
multiply(a, b)
Multiply vector `a` with `b`, in the form `(a.x * b, a.y * b)`.
Parameters:
a : Vector2 . Vector2 object.
b : float . Value.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = 1.0 , c = multiply(a, b) , plot(c.x)`
multiply(a, b)
Multiply vector `a` with `b`, in the form `(a * b.x, a * b.y)`.
Parameters:
a : float . Value.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = 1.0 , b = one() , c = multiply(a, b) , plot(c.x)`
divide(a, b)
Divide vector `a` with `b`, in the form `(a.x / b.x, a.y / b.y)`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = from(2.0) , c = divide(a, b) , plot(c.x)`
divide(a, b)
Divide vector `a` with value `b`, in the form `(a.x / b, a.y / b)`.
Parameters:
a : Vector2 . Vector2 object.
b : float . Value.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = 2.0 , c = divide(a, b) , plot(c.x)`
divide(a, b)
Divide value `a` with vector `b`, in the form `(a / b.x, a / b.y)`.
Parameters:
a : float . Value.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = 3.0 , b = from(2.0) , c = divide(a, b) , plot(c.x)`
negate(a)
Negative of vector `a`, in the form `(-a.x, -a.y)`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = a.negate() , plot(b.x)`
pow(a, b)
Raise vector `a` with exponent vector `b`, in the form `(a.x ^ b.x, a.y ^ b.y)`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = from(2.0) , c = pow(a, b) , plot(c.x)`
pow(a, b)
Raise vector `a` with value `b`, in the form `(a.x ^ b, a.y ^ b)`.
Parameters:
a : Vector2 . Vector2 object.
b : float . Value.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = 2.0 , c = pow(a, b) , plot(c.x)`
pow(a, b)
Raise value `a` with vector `b`, in the form `(a ^ b.x, a ^ b.y)`.
Parameters:
a : float . Value.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = 3.0 , b = from(2.0) , c = pow(a, b) , plot(c.x)`
sqrt(a)
Square root of the elements in a vector.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = sqrt(a) , plot(b.x)`
abs(a)
Absolute properties of the vector.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(-3.0) , b = abs(a) , plot(b.x)`
min(a)
Lowest element of a vector.
Parameters:
a : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = min(a) , plot(b)`
max(a)
Highest element of a vector.
Parameters:
a : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = max(a) , plot(b)`
vmax(a, b)
Highest elements of two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 2.0) , b = new(2.0, 3.0) , c = vmax(a, b) , plot(c.x)`
vmax(a, b, c)
Highest elements of three vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
c : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 2.0) , b = new(2.0, 3.0) , c = new(1.5, 4.5) , d = vmax(a, b, c) , plot(d.x)`
vmin(a, b)
Lowest elements of two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 2.0) , b = new(2.0, 3.0) , c = vmin(a, b) , plot(c.x)`
vmin(a, b, c)
Lowest elements of three vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
c : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 2.0) , b = new(2.0, 3.0) , c = new(1.5, 4.5) , d = vmin(a, b, c) , plot(d.x)`
perp(a)
Perpendicular Vector of `a`, in the form `(a.y, -a.x)`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = perp(a) , plot(b.x)`
floor(a)
Compute the floor of vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = floor(a) , plot(b.x)`
ceil(a)
Ceils vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = ceil(a) , plot(b.x)`
ceil(a, digits)
Ceils vector `a`.
Parameters:
a : Vector2 . Vector2 object.
digits : int . Digits to use as ceiling.
Returns: Vector2. Vector2 object.
round(a)
Round of vector elements.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = round(a) , plot(b.x)`
round(a, precision)
Round of vector elements.
Parameters:
a : Vector2 . Vector2 object.
precision : int . Number of digits to round vector "a" elements.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(0.123456, 1.234567) , b = round(a, 2) , plot(b.x)`
fractional(a)
Compute the fractional part of the elements from vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.123456, 1.23456) , b = fractional(a) , plot(b.x)`
dot_product(a, b)
Dot product of 2 vectors, in the form `a.x * b.x + a.y * b.y`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = dot_product(a, b) , plot(c)`
cross_product(a, b)
Cross product of 2 vectors, in the form `a.x * b.y - a.y * b.x`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = cross_product(a, b) , plot(c)`
equals(a, b)
Compares two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: bool. Representing the equality.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = equals(a, b) ? 1 : 0 , plot(c)`
sin(a)
Compute the sine of argument vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = sin(a) , plot(b.x)`
cos(a)
Compute the cosine of argument vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = cos(a) , plot(b.x)`
tan(a)
Compute the tangent of argument vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = tan(a) , plot(b.x)`
atan2(x, y)
Approximation to atan2 calculation, arc tangent of `y/x` in the range (-pi,pi) radians.
Parameters:
x : float . The x value of the vector.
y : float . The y value of the vector.
Returns: float. Value with angle in radians. (negative if quadrant 3 or 4)
-> usage:
`a = new(3.0, 1.5) , b = atan2(a.x, a.y) , plot(b)`
atan2(a)
Approximation to atan2 calculation, arc tangent of `y/x` in the range (-pi,pi) radians.
Parameters:
a : Vector2 . Vector2 object.
Returns: float. Value with angle in radians. (negative if quadrant 3 or 4)
-> usage:
`a = new(3.0, 1.5) , b = atan2(a) , plot(b)`
distance(a, b)
Distance between vector `a` and `b`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = distance(a, b) , plot(c)`
rescale(a, length)
Rescale a vector to a new magnitude.
Parameters:
a : Vector2 . Vector2 object.
length : float . Magnitude.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = 2.0 , c = rescale(a, b) , plot(c.x)`
rotate(a, radians)
Rotates vector by an angle.
Parameters:
a : Vector2 . Vector2 object.
radians : float . Angle value in radians.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = 2.0 , c = rotate(a, b) , plot(c.x)`
rotate_degree(a, degree)
Rotates vector by an angle.
Parameters:
a : Vector2 . Vector2 object.
degree : float . Angle value in degrees.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = 45.0 , c = rotate_degree(a, b) , plot(c.x)`
rotate_around(this, center, angle)
Rotates vector `this` around `center` by angle value.
Parameters:
this : Vector2 . Source vector to rotate.
center : Vector2 . Vector2 object.
angle : float . Angle value in degrees.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = rotate_around(a, b, 45.0) , plot(c.x)`
perpendicular_distance(a, b, c)
Distance from point `a` to line between `b` and `c`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
c : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(1.5, 2.6) , b = from(1.0) , c = from(3.0) , d = perpendicular_distance(a, b, c) , plot(d)`
project(a, axis)
Project a vector onto another.
Parameters:
a : Vector2 . Vector2 object.
axis : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = project(a, b) , plot(c.x)`
projectN(a, axis)
Project a vector onto a vector of unit length.
Parameters:
a : Vector2 . Vector2 object.
axis : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = projectN(a, b) , plot(c.x)`
reflect(a, axis)
Reflect a vector on another.
Parameters:
a : Vector2 . Vector2 object.
axis : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = reflect(a, b) , plot(c.x)`
reflectN(a, axis)
Reflect a vector onto an arbitrary axis.
Parameters:
a : Vector2 . Vector2 object.
axis : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = reflectN(a, b) , plot(c.x)`
angle(a)
Angle in radians of a vector.
Parameters:
a : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = angle(a) , plot(b)`
angle_unsigned(a, b)
Unsigned degree angle between 0 and +180 by given two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = angle_unsigned(a, b) , plot(c)`
angle_signed(a, b)
Signed degree angle between -180 and +180 by given two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = angle_signed(a, b) , plot(c)`
angle_360(a, b)
Degree angle between 0 and 360 by given two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = angle_360(a, b) , plot(c)`
clamp(a, min, max)
Restricts a vector between a min and max value.
Parameters:
a : Vector2 . Vector2 object.
min : Vector2 . Vector2 object. Lower boundary.
max : Vector2 . Vector2 object. Upper boundary.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = from(2.5) , d = clamp(a, b, c) , plot(d.x)`
clamp(a, min, max)
Restricts a vector between a min and max value.
Parameters:
a : Vector2 . Vector2 object.
min : float . Lower boundary value.
max : float . Higher boundary value.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = clamp(a, 2.0, 2.5) , plot(b.x)`
lerp(a, b, rate)
Linearly interpolates between vectors a and b by rate.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
rate : float . Value between (a:-infinity -> b:1.0), negative values will move away from b.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = lerp(a, b, 0.5) , plot(c.x)`
herp(a, b, rate)
Hermite curve interpolation between vectors a and b by rate.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
rate : Vector2 . Vector2 object. Value between (a:0 > 1:b).
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = from(2.5) , d = herp(a, b, c) , plot(d.x)`
transform(position, mat)
Transform a vector by the given matrix.
Parameters:
position : Vector2 . Source vector.
mat : M32 . Transformation matrix
Returns: Vector2. Transformed vector.
transform(position, mat)
Transform a vector by the given matrix.
Parameters:
position : Vector2 . Source vector.
mat : M44 . Transformation matrix
Returns: Vector2. Transformed vector.
transform(position, mat)
Transform a vector by the given matrix.
Parameters:
position : Vector2 . Source vector.
mat : matrix . Transformation matrix, requires a 3x2 or a 4x4 matrix.
Returns: Vector2. Transformed vector.
transform(this, rotation)
Transform a vector by the given quaternion rotation value.
Parameters:
this : Vector2 . Source vector.
rotation : Quaternion . Rotation to apply.
Returns: Vector2. Transformed vector.
area_triangle(a, b, c)
Find the area in a triangle of vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
c : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(1.0, 2.0) , b = from(2.0) , c = from(1.0) , d = area_triangle(a, b, c) , plot(d)`
random(max)
2D random value.
Parameters:
max : Vector2 . Vector2 object. Vector upper boundary.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(2.0) , b = random(a) , plot(b.x)`
random(max)
2D random value.
Parameters:
max : float . Vector upper boundary.
Returns: Vector2. Vector2 object.
-> usage:
`a = random(2.0) , plot(a.x)`
random(min, max)
2D random value.
Parameters:
min : Vector2 . Vector2 object. Vector lower boundary.
max : Vector2 . Vector2 object. Vector upper boundary.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(1.0) , b = from(2.0) , c = random(a, b) , plot(c.x)`
random(min, max)
2D random value.
Parameters:
min : Vector2 . Vector2 object. Vector lower boundary.
max : Vector2 . Vector2 object. Vector upper boundary.
Returns: Vector2. Vector2 object.
-> usage:
`a = random(1.0, 2.0) , plot(a.x)`
noise(a)
2D Noise based on Morgan McGuire @morgan3d.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(2.0) , b = noise(a) , plot(b.x)`
to_string(a)
Converts vector `a` to a string format, in the form `"(x, y)"`.
Parameters:
a : Vector2 . Vector2 object.
Returns: string. In `"(x, y)"` format.
-> usage:
`a = from(2.0) , l = barstate.islast ? label.new(bar_index, 0.0, to_string(a)) : label(na)`
to_string(a, format)
Converts vector `a` to a string format, in the form `"(x, y)"`.
Parameters:
a : Vector2 . Vector2 object.
format : string . Format to apply transformation.
Returns: string. In `"(x, y)"` format.
-> usage:
`a = from(2.123456) , l = barstate.islast ? label.new(bar_index, 0.0, to_string(a, "#.##")) : label(na)`
to_array(a)
Converts vector to an array format.
Parameters:
a : Vector2 . Vector2 object.
Returns: array.
-> usage:
`a = from(2.0) , b = to_array(a) , plot(array.get(b, 0))`
to_barycentric(this, a, b, c)
Captures the barycentric coordinate of a Cartesian position in the triangle plane.
Parameters:
this : Vector2 . Source Cartesian coordinate position.
a : Vector2 . Triangle corner `a` vertex.
b : Vector2 . Triangle corner `b` vertex.
c : Vector2 . Triangle corner `c` vertex.
Returns: bool.
from_barycentric(this, a, b, c)
Captures the Cartesian coordinate of a barycentric position in the triangle plane.
Parameters:
this : Vector2 . Source barycentric coordinate position.
a : Vector2 . Triangle corner `a` vertex.
b : Vector2 . Triangle corner `b` vertex.
c : Vector2 . Triangle corner `c` vertex.
Returns: bool.
to_complex(this)
Translate a Vector2 structure to complex.
Parameters:
this : Vector2 . Source vector.
Returns: Complex.
to_polar(this)
Translate a Vector2 Cartesian coordinate into polar coordinates.
Parameters:
this : Vector2 . Source vector.
Returns: Pole. The returned angle is in radians.
Balance of Force Day of the Week (BOFDW)
The script is a custom technical indicator for TradingView that is based on an analysis of the price movements of a financial instrument over the course of a week. The indicator uses a variety of inputs, including the open and close prices for each day of the week, to determine the Balance of Force (BOF) for each day.
The BOF is calculated based on the relative magnitude of bullish and bearish price movements and is then used to determine the average BOF over a moving window of data points. This average BOF is displayed on the chart as an overlay, providing a measure of the average bullishness or bearishness of the financial instrument over the course of a week.
The indicator also allows users to specify the location of the overlay on the chart and to customize the appearance of the overlay with options for text and box colors. The script provides a number of built-in options for chart position, including the top-left, top-middle, top-right, middle-left, middle-center, middle-right, bottom-left, bottom-middle, and bottom-right corners of the chart.
Overall, this custom technical indicator is a useful tool for traders and investors who are looking to gain a deeper understanding of the price trends of a financial instrument over the course of a week. By providing a clear and concise measure of the average BOF over time, the indicator can help users identify key patterns in the market and make more informed trading decisions. A minimal sketch of the idea follows.
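The sketch below illustrates one way to bucket a bullish/bearish force measure by weekday. The signed candle body as the BOF measure and the cumulative (rather than moving-window) average are assumptions for brevity; the published script's exact formula may differ:

```pine
//@version=5
indicator("Day-of-week balance of force (sketch)")
// Accumulate a hypothetical BOF measure (signed candle body) per weekday.
var float[] sums   = array.new_float(7, 0.0)
var int[]   counts = array.new_int(7, 0)
d = dayofweek - 1  // dayofweek: 1 (Sunday) .. 7 (Saturday)
array.set(sums,   d, array.get(sums,   d) + (close - open))
array.set(counts, d, array.get(counts, d) + 1)
avgBof = array.get(sums, d) / math.max(array.get(counts, d), 1)
plot(avgBof, "Average BOF for today's weekday")
```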
Any Oscillator Underlay [TTF]
We are proud to release a new indicator that has been a while in the making - the Any Oscillator Underlay (AOU)!
Note: There is a lot to discuss regarding this indicator, including its intent and some of how it operates, so please be sure to read this entire description before using this indicator to help ensure you understand both the intent and some limitations with this tool.
Our intent for building this indicator was to accomplish the following:
Combine all of the oscillators that we like to use into a single indicator
Take up a bit less screen space for the underlay indicators for strategies that utilize multiple oscillators
Provide a tool for newer traders to be able to leverage multiple oscillators in a single indicator
Features:
Includes 8 separate, fully-functional indicators combined into one
Ability to easily enable/disable and configure each included indicator independently
Clearly named plots to support user customization of color and styling, as well as manual creation of alerts
Ability to customize sub-indicator title position and color
Ability to customize sub-indicator divider lines style and color
Indicators that are included in this initial release:
TSI
2x RSIs (dubbed the Twin RSI )
Stochastic RSI
Stochastic
Ultimate Oscillator
Awesome Oscillator
MACD
Outback RSI (Color-coding only)
Quick note on OB/OS:
Before we get into covering each included indicator, we first need to cover a core concept for how we're defining OB and OS levels. To help illustrate this, we will use the TSI as an example.
The TSI by default has a mid-point of 0 and a range of -100 to 100. As a result, a common practice is to place lines on the -30 and +30 levels to represent OS and OB zones, respectively. Most people tend to view these levels as distance from the edges/outer bounds or as absolute levels, but we feel a more intuitive way to frame the OB/OS concept is to instead define it as distance ("offset") from the mid-line. In keeping with the -30 and +30 levels in our example, the offset in this case would be "30".
Taking this a step further, let's say we decided we wanted an offset of 25. Since the mid-point is 0, we'd then calculate the OB level as 0 + 25 (+25), and the OS level as 0 - 25 (-25).
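In code, the offset framing is a two-line computation. The sketch below uses the TSI example from the text; note that Pine's built-in ta.tsi() returns values in -1..1, so it is scaled up to the classic -100..100 range here:

```pine
//@version=5
indicator("OB/OS as offset from the mid-line (sketch)")
float mid    = 0.0        // TSI mid-line in this example
float offset = 25.0       // chosen distance from the mid-line
plot(ta.tsi(close, 13, 25) * 100, "TSI")  // ta.tsi() returns -1..1; scale to -100..100
plot(mid + offset, "OB")  // overbought: 0 + 25 = +25
plot(mid - offset, "OS")  // oversold:   0 - 25 = -25
```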
Now that we've covered the concept of how we approach defining OB and OS levels (based on offset/distance from the mid-line), and since we did apply some transformations, rescaling, and/or repositioning to all of the indicators noted above, we are going to discuss each component indicator to detail both how it was modified from the original to fit the stacked-indicator model, as well as the various major components that the indicator contains.
TSI:
This indicator contains the following major elements:
TSI and TSI Signal Line
Color-coded fill for the TSI/TSI Signal lines
Moving Average for the TSI
TSI Histogram
Mid-line and OB/OS lines
Default TSI fill color coding:
Green : TSI is above the signal line
Red : TSI is below the signal line
Note: The TSI traditionally has a range of -100 to +100 with a mid-point of 0 (range of 200). To fit into our stacking model, we first shrunk the range to 100 (-50 to +50 - cut it in half), then repositioned it to have a mid-point of 50. Since this is the "bottom" of our indicator-stack, no additional repositioning is necessary.
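Halving the classic -100..+100 range and moving the mid-point to 50 is the linear map y = x/2 + 50. A sketch of that transformation using Pine's built-in TSI (which returns -1..1, so the equivalent expression is tsi * 50 + 50); the stacked indicator's actual lengths and positioning logic may differ:

```pine
//@version=5
indicator("TSI rescaled to a 0..100 stack slot (sketch)")
// y = x/2 + 50 on the classic scale; with ta.tsi() in -1..1 this becomes tsi * 50 + 50
rescaled = ta.tsi(close, 13, 25) * 50 + 50
plot(rescaled, "Rescaled TSI")
plot(50, "Mid-line")
plot(75, "OB (offset 25)")
plot(25, "OS (offset 25)")
```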
Twin RSI:
This indicator contains the following major elements:
Fast RSI (useful if you want to leverage 2x RSIs as it makes it easier to see the overlaps and crosses - can be disabled if desired)
Slow RSI (primary RSI)
Color-coded fill for the Fast/Slow RSI lines (if Fast RSI is enabled and configured)
Moving Average for the Slow RSI
Mid-line and OB/OS lines
Default Twin RSI fill color coding:
Dark Red : Fast RSI below Slow RSI and Slow RSI below Slow RSI MA
Light Red : Fast RSI below Slow RSI and Slow RSI above Slow RSI MA
Dark Green : Fast RSI above Slow RSI and Slow RSI below Slow RSI MA
Light Green : Fast RSI above Slow RSI and Slow RSI above Slow RSI MA
Note: The RSI naturally has a range of 0 to 100 with a mid-point of 50, so no rescaling or transformation is done on this indicator. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Stochastic and Stochastic RSI:
These indicators contain the following major elements:
Configurable lengths for the RSI (for the Stochastic RSI only), K, and D values
Configurable base price source
Mid-line and OB/OS lines
Note: The Stochastic and Stochastic RSI both have a normal range of 0 to 100 with a mid-point of 50, so no rescaling or transformations are done on either of these indicators. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Ultimate Oscillator (UO):
This indicator contains the following major elements:
Configurable lengths for the Fast, Middle, and Slow BP/TR components
Mid-line and OB/OS lines
Moving Average for the UO
Color-coded fill for the UO/UO MA lines (if UO MA is enabled and configured)
Default UO fill color coding:
Green : UO is above the moving average line
Red : UO is below the moving average line
Note: The UO naturally has a range of 0 to 100 with a mid-point of 50, so no rescaling or transformation is done on this indicator. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Awesome Oscillator (AO):
This indicator contains the following major elements:
Configurable lengths for the Fast and Slow moving averages used in the AO calculation
Configurable price source for the moving averages used in the AO calculation
Mid-line
Option to display the AO as a line or pseudo-histogram
Moving Average for the AO
Color-coded fill for the AO/AO MA lines (if AO MA is enabled and configured)
Default AO fill color coding (Note: Fill was disabled in the image above to improve clarity):
Green : AO is above the moving average line
Red : AO is below the moving average line
Note: The AO technically has an infinite (unbounded) range - -∞ to ∞ - and the effective range is bound to the underlying security price (e.g. BTC will have a wider range than SP500, and SP500 will have a wider range than EUR/USD). We employed some special techniques to rescale this indicator into our desired range of 100 (-50 to 50), and then repositioned it to have a midpoint of 50 (range of 0 to 100) to meet the constraints of our stacking model. We then do one final repositioning to place it in the correct position in the indicator-stack based on which other indicators are also enabled. For more details on how we accomplished this, read our section "Binding Infinity" below.
MACD:
This indicator contains the following major elements:
Configurable lengths for the Fast and Slow moving averages used in the MACD calculation
Configurable price source for the moving averages used in the MACD calculation
Configurable length and calculation method for the MACD Signal Line calculation
Mid-line
Note: Like the AO, the MACD also technically has an infinite (unbounded) range. We employed the same principles here as we did with the AO to rescale and reposition this indicator as well. For more details on how we accomplished this, read our section "Binding Infinity" below.
Outback RSI (ORSI):
This is a stripped-down version of the Outback RSI indicator (linked above) that only includes the color-coding background (suffice it to say that it was not technically feasible to attempt to rescale the other components in a way that could consistently be clearly seen on-chart). As this component is a bit of a niche/special-purpose sub-indicator, it is disabled by default, and we suggest it remain disabled unless you have some pre-defined strategy that leverages the color-coding element of the Outback RSI that you wish to use.
Binding Infinity - How We Incorporated the AO and MACD (Warning - Math Talk Ahead!)
Note: This applies only to the AO and MACD at time of original publication. If any other indicators are added in the future that also fall into the category of "binding an infinite-range oscillator", we will make that clear in the release notes when that new addition is published.
To help set the stage for this discussion, it's important to note that the broader challenge of "equalizing inputs" is nothing new. In fact, it's a key element in many of the most popular fields of data science, such as AI and Machine Learning. They need to take a diverse set of inputs with a wide variety of ranges and seemingly-random inputs (referred to as "features"), and build a mathematical or computational model in order to work. But, when the raw inputs can vary significantly from one another, there is an inherent need to do some pre-processing to those inputs so that one doesn't overwhelm another simply due to the difference in raw values between them. This is where feature scaling comes into play.
With this in mind, we implemented 2 of the most common methods of Feature Scaling - Min-Max Normalization (which we call "Normalization" in our settings), and Z-Score Normalization (which we call "Standardization" in our settings). Let's take a look at each of those methods as they have been implemented in this script.
Min-Max Normalization (Normalization)
This is one of the most common - and most basic - methods of feature scaling. The basic formula is: y = (x - min)/(max - min) - where x is the current data sample, min is the lowest value in the dataset, and max is the highest value in the dataset. In this transformation, the max would evaluate to 1, and the min would evaluate to 0, and any value in between the min and the max would evaluate somewhere between 0 and 1.
The key benefits of this method are:
It can be used to transform datasets of any range into a new dataset with a consistent and known range (0 to 1).
It has no dependency on the "shape" of the raw input dataset (i.e. does not assume the input dataset can be approximated to a normal distribution).
But there are a couple of "gotchas" with this technique...
First, it assumes the input dataset is complete, or an accurate representation of the population via random sampling. While in most situations this is a valid assumption, in trading indicators we don't really have that luxury as we're often limited in what sample data we can access (i.e. number of historical bars available).
Second, this method is highly sensitive to outliers. Since the crux of this transformation is based on the max-min to define the initial range, a single significant outlier can result in skewing the post-transformation dataset (i.e. major price movement as a reaction to a significant news event).
You can potentially mitigate those 2 "gotchas" by using a mechanism or technique to find and discard outliers (e.g. calculate the mean and standard deviation of the input dataset and discard any raw values more than 5 standard deviations from the mean), but if your most recent datapoint is an "outlier" as defined by that algorithm, processing it using the "scrubbed" dataset would result in that new datapoint being outside the intended range of 0 to 1 (e.g. if the new datapoint is greater than the "scrubbed" max, it's post-transformation value would be greater than 1). Even though this is a bit of an edge-case scenario, it is still sure to happen in live markets processing live data, so it's not an ideal solution in our opinion (which is why we chose not to attempt to discard outliers in this manner).
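For reference, here is a minimal sketch of Min-Max Normalization computed over a rolling window (a full population isn't available on a chart, so the window length stands in for the dataset; the length is illustrative):

```pine
//@version=5
indicator("Min-Max normalization (sketch)")
len  = input.int(200, "Sample window")
x    = close
mn   = ta.lowest(x, len)
mx   = ta.highest(x, len)
// y = (x - min) / (max - min); a single new extreme re-skews the whole scale,
// which is exactly the outlier sensitivity described above.
norm = (x - mn) / (mx - mn)
plot(norm, "Normalized (0..1)")
```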
Z-Score Normalization (Standardization)
This method of rescaling is a bit more complex than the Min-Max Normalization method noted above, but it is also a widely used process. The basic formula is: y = (x – μ) / σ - where x is the current data sample, μ is the mean (average) of the input dataset, and σ is the standard deviation of the input dataset. While this transformation still results in a technically-infinite possible range, the output of this transformation has 2 very significant properties - the output dataset has a mean (μ) of 0 and a standard deviation (σ) of 1.
The key benefits of this method are:
As it's based on normalizing the mean and standard deviation of the input dataset instead of a linear range conversion, it is far less susceptible to outliers significantly affecting the result (and in fact has the effect of "squishing" outliers).
It can be used to accurately transform disparate sets of data into a similar range regardless of the original dataset's raw/actual range.
But there are a couple of "gotchas" with this technique as well...
First, it still technically does not do any form of range-binding, so it is still technically unbounded (range -∞ to ∞ with a mid-point of 0).
Second, it implicitly assumes that the raw input dataset to be transformed is normally distributed, which won't always be the case in financial markets.
The first "gotcha" is a bit of an annoyance, but isn't a huge issue as we can apply principles of normal distribution to conceptually limit the range by defining a fixed number of standard deviations from the mean. While this doesn't totally solve the "infinite range" problem (a strong enough sudden move can still break out of our "conceptual range" boundaries), the amount of movement needed to achieve that kind of impact will generally be pretty rare.
The bigger challenge is how to deal with the assumption of the input dataset being normally distributed. While most financial markets (and indicators) do tend towards a normal distribution, they are almost never going to match that distribution exactly. So let's dig a bit deeper into distributions are defined and how things like trending markets can affect them.
Skew (skewness): This is a measure of asymmetry of the bell curve, or put another way, how and in what way the bell curve is disfigured when comparing the 2 halves. The easiest way to visualize this is to draw an imaginary vertical line through the apex of the bell curve, then fold the curve in half along that line. If both halves are exactly the same, the skew is 0 (no skew/perfectly symmetrical) - which is what a normal distribution has (skew = 0). Most financial markets tend to have short, medium, and long-term trends, and these trends will cause the distribution curve to skew in one direction or another. Bullish markets tend to skew to the right (positive), and bearish markets to the left (negative).
Kurtosis: This is a measure of the "tail size" of the bell curve. Another way to state this could be how "flat" or "steep" the bell-shape is. If the bell is steep with a strong drop from the apex (like a steep cliff), it has low kurtosis. If the bell has a shallow, more sweeping drop from the apex (like a tall hill), it has high kurtosis. Translating this to financial markets, kurtosis is generally a metric of volatility, as the bell shape is largely defined by the strength and frequency of outliers. Volatile markets tend to have a high level of kurtosis (>3), and stable/consolidating markets tend to have a low level of kurtosis (<3). A normal distribution (our reference) has a kurtosis value of 3.
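If you want to see these two moments on a chart, a rough rolling estimate can be computed from averaged powers of the deviation. Using the current window mean inside the averaged powers is an approximation, but it is good enough to visualise how skew and kurtosis drift with trend and volatility (window length is illustrative):

```pine
//@version=5
indicator("Rolling skewness and kurtosis (sketch)")
len  = input.int(200, "Window")
mu   = ta.sma(close, len)
sd   = ta.stdev(close, len)
skew = ta.sma(math.pow(close - mu, 3), len) / math.pow(sd, 3)  // 0 for a symmetric bell
kurt = ta.sma(math.pow(close - mu, 4), len) / math.pow(sd, 4)  // ~3 for a normal distribution
plot(skew, "Skewness")
plot(kurt, "Kurtosis")
```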
So to try and bring all that back together, here's a quick recap of the Standardization rescaling method:
The Standardization method has an assumption of a normal distribution of input data by using the mean (average) and standard deviation to handle the transformation
Most financial markets do NOT have a normal distribution (as discussed above), and will have varying degrees of skew and kurtosis
Q: Why are we still favoring the Standardization method over the Normalization method, and how are we accounting for the innate skew and/or kurtosis inherent in most financial markets?
A: Well, since we're only trying to rescale oscillators that by-definition have a midpoint of 0, kurtosis isn't a major concern beyond the effect it has on the post-transformation scaling (specifically, the number of standard deviations from the mean we need to include in our "artificially-bound" range definition).
Q: So that answers the question about kurtosis, but what about skew?
A: So - for skew, the answer is in the formula - specifically the mean (average) element. The standard mean calculation assumes a complete dataset and therefore uses a standard (i.e. simple) average, but we're limited by the data history available to us. So we adapted the transformation formula to leverage a moving average that included a weighting element to it so that it favored recent datapoints more heavily than older ones. By making the average component more adaptive, we gained the effect of reducing the skew element by having the average itself be more responsive to recent movements, which significantly reduces the effect historical outliers have on the dataset as a whole. While this is certainly not a perfect solution, we've found that it serves the purpose of rescaling the MACD and AO to a far more well-defined range while still preserving the oscillator behavior and mid-line exceptionally well.
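To tie the pieces together, here is a minimal sketch of the approach described above applied to the AO, assuming an EMA as the weighted, recency-favouring average - the indicator's exact average, window, and sigma bound may differ:

```pine
//@version=5
indicator("Weighted Z-Score rescale of the AO (sketch)")
len = input.int(100, "Standardization window")
ao  = ta.sma(hl2, 5) - ta.sma(hl2, 34)      // classic Awesome Oscillator
mu  = ta.ema(ao, len)                       // weighted mean, reduces the skew effect
z   = (ao - mu) / ta.stdev(ao, len)         // y = (x - mu) / sigma
zc  = math.max(-3.0, math.min(3.0, z))      // conceptually bind the range to +/- 3 sigma
plot(50 + zc / 3 * 50, "AO rescaled to 0..100")
plot(50, "Mid-line")
```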
The most difficult parts to compensate for are periods where markets have low volatility for an extended period of time - to the point where the oscillators are hovering around the 0/midline (in the case of the AO), or when the oscillator and signal lines converge and remain close to each other (in the case of the MACD). It's during these periods where even our best attempt at ensuring accurate mirrored-behavior when compared to the original can still occasionally lead or lag by a candle.
Note: If this is a make-or-break situation for you or your strategy, then we recommend you do not use any of the included indicators that leverage this kind of bounding technique (the AO and MACD at time of publication) and instead use the TradingView built-in versions!
We know this is a lot to read and digest, so please take your time and feel free to ask questions - we will do our best to answer! And as always, constructive feedback is always welcome!
PSv5 3D Array/Matrix Super Hack
"In a world of ever pervasive and universal deceit, telling a simple truth is considered a revolutionary act."
INTRO:
First, how about a little bit of philosophic poetry with another dimension applied to it?
The "matrix of control" is everywhere...
It is all around us, even now in the very place you reside. You can see it when you look at your digitized window outwards into the world, or when you turn on regularly scheduled television "programs" to watch news narratives and movies that subliminally influence your thoughts, feelings, and emotions. You have felt it every time you have clocked into dead end job workplaces... when you unknowingly worshiped on the conformancy alter to cultish ideologies... and when you pay your taxes to a godvernment that is poisoning you softly and quietly by injecting your mind and body with (psyOps + toxicCompounds). It is a fictitiously generated world view that has been pulled over your eyes to blindfold, censor, and mentally prostrate you from spiritually hearing the real truth.
What TRUTH you must wonder? That you are cognitively enslaved, like everyone else. You were born into mental bondage, born into an illusory societal prison complex that you are entirely incapable of smelling, tasting, or touching. Its a contrived monetary prison enterprise for your mind and eternal soul, built by pretending politicians, corporate CONartists, and NonGoverning parasitic Organizations deploying any means of infiltration and deception by using every tactic unimaginable. You are slowly being convinced into becoming a genetically altered cyborg by acclimation, socially engineered and chipped to eventually no longer be 100% human.
Unfortunately no one can be told eloquently enough in words what the matrix of control truly is. You have to experience it and witness it for yourself. This is your chance to program a future paradigm that doesn't yet exist. After visiting here, there is absolutely no turning back. You can continually take the blue pill BIGpharmacide wants you to repeatedly intake. The story ends if you continually sleep walk through a 2D hologram life, believing whatever you wish to believe until you cease to exist. OR, you can take the red pill challenge, explore "question every single thing" wonderland, program your arse off with 3D capabilities, ultimately ascertaining a new mathematical empyrean. Only then can you fully awaken to discover how deep the rabbit hole state of affairs transpire worldwide with a genuine open mind.
Remember, all I'm offering is a mathematical truth, nothing more...
PURPOSE:
With that being said above, it is now time for advanced developers to start creating their own matrix constructs in 3D, in Pine, just as the universe is created spatially. For those of you who instantly know what this script's potential is easily capable of, you already know what you have to do with it. While this is simplistically just a 3D array for either integers or floats, additional companion functions can in the future be constructed by other members to provide a more complete matrix/array library for millions of folks on TV. I do encourage the most courageous of mathemagicians on TV to do so. I have been employing very large 2D/3D array structures for quite some time, and their utility seems to be of great benefit. Discovering that for myself, I fully realized that Pine is incomplete and must be provided with this agility to process complex datasets that traders WILL use in the future. Mark my words!
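The core trick of emulating a 3D structure in Pine is flattening an X*Y*Z cube into one 1D array and addressing it with index = x*(Y*Z) + y*Z + z. The sketch below shows that arithmetic; the names, dimensions, and accessors are illustrative, not the published script's API:

```pine
//@version=5
indicator("3D array over a 1D buffer (sketch)")
int X = 10
int Y = 10
int Z = 10
var cube = array.new_float(X * Y * Z, 0.0)
// Flattened index: index = x*(Y*Z) + y*Z + z
idx(int x, int y, int z) => x * (Y * Z) + y * Z + z
set3d(int x, int y, int z, float v) => array.set(cube, idx(x, y, z), v)
get3d(int x, int y, int z) => array.get(cube, idx(x, y, z))
if barstate.isfirst
    set3d(3, 4, 5, 42.0)  // e.g. build a large coefficient cache once at bar zero
plot(get3d(3, 4, 5))
```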
CONCEPTION:
While I have long realized and theorized this code for a great duration of time, I was finally able to turn it into a Pine reality with the assistance and training of an "artificially intuitive" program while probing its aptitude. Even though it knows virtually nothing about Pine Script 4.0 or 5.0 syntax, functions, and behavior, I was able to conjure code into an identity similar to what you see now within a few minutes. Close enough for me! Many manual edits later for pine compliance, and I had it in chart, presto!
While most people consider the service to be an "AI", it didn't pass my Pine Turing test. I did have to repeatedly correct it, suffered through numerous apologies from it, was forced to use specifically tailored words, and also rationally debate AND argued with it. It is a handy helper but beware of generating Pine code from it, trust me on this one. However... this artificially intuitive service is currently available in its infancy as version 3. Version 4 most likely will have more diversity to enhance my algorithmic expertise of Pine wizardry. I do have to thank E.M. and his developers for an eye opening experience, or NONE of this code below would be available as you now witness it today.
LIMITATIONS:
As of this initial release, Pine only supports 100,000 array elements maximum. For example, when using this code, a 50x50x40 element configuration will exceed this limit, but 50x50x39 will work. You will always have to keep that in mind during development. Running that size of an array structure on every single bar will most likely time out within 20-40 seconds. This is not the most efficient method compared to a real native 3D array in action. Ehlers adepts, this might not be 100% of what you require to "move forward". You can try, but head room with a low ceiling currently will be challenging to walk in for now, even with extremely optimized Pine code.
A few common functions are provided, but this can be extended extensively later if you choose to undertake that endeavor. Use the code as is and/or however you deem necessary. Any TV member is granted absolute freedom to do what they wish as they please. I ultimately wish to eventually see a fully equipped library version for both matrix3D AND array3D created by collaborative efforts that will probably require many Pine poets testing collectively. This is just a bare bones prototype until that day arrives. Considerably more computational server power will be required also. Anyways, I hope you shall find this code somewhat useful.
Notice: Unfortunately, I will not provide any integration support into members projects at all. I have my own projects that require too much of my time already.
POTENTIAL APPLICATIONS:
The creation of very large coefficient 3D caches/buffers specifically at bar_index==0 can dramatically increase runtime agility for thousands of bars onwards. Generating 1000s of values once and just accessing those generated values is much faster. Also, when running dozens of algorithms simultaneously, a record of performance statistics can be kept, self-analyzed, and visually presented to the developer/user. And, everything else under the sun can be created beyond a developers wildest dreams...
EPILOGUE:
Free your mind!!! And unleash weapons of mass financial creation upon the earth for all to utilize via the "Power of Pine". Flying monkeys and minions are waging economic sabotage upon humanity, decimating markets and exchanges. You can always see it your market charts when things go horribly wrong. This is going to be an astronomical technical challenge to continually navigate very choppy financial markets that are increasingly becoming more and more unstable and volatile. Ordinary one plot algorithms simply are not enough anymore. Statistics and analysis sits above everything imagined. This includes banking, godvernment, corporations, REAL science, technology, health, medicine, transportation, energy, food, etc... We have a unique perspective of the world that most people will never get to see, depending on where you look. With an ever increasingly complex world in constant dynamic flux, novel ways to process data intricately MUST emerge into existence in order to tackle phenomenal tasks required in the future. Achieving data analysis in 3D forms is just one lonely step of many more to come.
At this time the WesternEconomicFraudsters and the WorldHealthOrders are attempting to destroy/reset the world's financial status in order to rain in chaos upon most nations, causing asset devaluation and hyper-inflation. Every form of deception, infiltration, and theft is occurring with a result of destroyed wealth in preparation to consolidate it. Open discussions, available to the public, by world leaders/moguls are fantasizing about new dystopian system as a one size fits all nations solution of digitalID combined with programmableDemonicCurrencies to usher in a new form of obedient servitude to a unipolar digitized hegemony of monetary vampires. If they do succeed with economic conquest, as they have publicly stated, people will be converted into human cattle, herded within smart cities, you will own nothing, eat bugs for breakfast/lunch/dinner, live without heat during severe winter conditions, and be happy. They clearly haven't done the math, as they are far outnumbered by a ratio of 1 to millions. Sith Lords do not own planet Earth! The new world disorder of human exploitation will FAIL. History, my "greatest teacher" for decades reminds us over, and over, and over again, and what are time series for anyways? They are for an intense mathematical analysis of prior historical values/conditions in relation to today's values/conditions... I imagine one day we will be able to ask an all-seeing AI, "WHO IS TO BLAME AND WHY AND WHEN?" comprised of 300 pages in great detail with images, charts, and statistics.
What are the true costs of malignant lies? I will tell you... 64bit numbers are NOT even capable of calculating the extreme cost of pernicious lies and deceit. That's how gigantic this monstrous globalization problem has become and how awful the "matrix of control" truly is now. ALL nations need a monumental revision of its CODE OF ETHICS, and that's definitely a multi-dimensional problem that needs solved sooner than later. If it was up to me, economies and technology would be developed so extensively to eliminate scarcity and increase the standard of living so high, that the notion of war and conflict would be considered irrelevant and extremely appalling to the future generations of humanity, our grandchildren born and unborn. The future will not be owned and operated by geriatric robber barons destined to expire quickly. The future will most likely be intensely "guided" by intelligent open source algorithms that youthful generations will inherit as their birth right.
P.S. Don't give me that politco-my-diction crap speech below in comments. If they weren't meddling with economics mucking up 100% of our chart results in 100% of tickers, I wouldn't have any cause to analyze any effects generated by them, nor provide this script's code. I am performing my analytical homework, but have you? Do you you know WHY international affairs are in dire jeopardy? Without why, the "Power of Pine" would have never existed as it specifically does today. I'm giving away much of my mental power generously to TV members so you are specifically empowered beyond most mathematical agilities commonly existing. I'm just a messenger of profound ideas. Loving and loathing of words is ALWAYS in the eye of beholders, and that's why the freedom of speech is enshrined as #1 in the constitutional code of the USA. Without it, this entire site might not have been allowed to exist from its founder's inceptions.