Multivariate Kalman Filter 🙏🏻
I see no one has ever posted an open-source multivariate Kalman filter on TV, so here it is, for you. It is a tested and mathematically correct implementation, with numerical safeties in place that do not affect the final results at all. That's the main purpose of this drop: just to make the tool available here. Linear algebra everywhere, Neo would approve for sure.
...
Personally I haven't found any real use case of it for myself, aside from a very specific one I will explain later, but others usually do…
Almost everyone in the quant industry who uses Kalman is in fact misusing it, because by its real definition it should be applied not to exact known values (e.g. real ticks provided by a transparent, audited, regulated exchange), but to "measurements" of those values, taken with errors.
If your measurements don't have errors, i.e. you have truly precise data, then by its internal formulas Kalman will simply output the exact inputs. So most who use it come up with imaginary errors of sorts, e.g. deviations from some kind of imaginary fair price. The important and easy-to-miss point: the errors of your measurements have to be, at the very least, symmetric around their mean; if the errors are biased, Kalman won't deliver.
For most tasks there are better tools, including other state space models, but still the multivariate Kalman is a very powerful instrument, and you can make it do all kinds of stuff. As a state space model it can also provide confidence & prediction intervals without explicit calculation of them.
...
In this script I included 2 example use cases: the first one is the classic (though perfectly working) misuse, the second one is what I do with it:
One
Naive: estimates a "hidden" adaptive moving regression endpoint. The result you can see on the chart above. You can imagine that your real datapoints are in fact imperfect measurements of some hidden state, and by defining measurement noise and process noise, and by constructing the input matrices in certain ways, you can express what that state should be.
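If you want to see the mechanics without reading the full script, here is a stripped-down scalar sketch of the predict/update cycle behind this use case. It is an illustration only, not the published multivariate code; q and r are placeholder values you would tune, and the state model is a plain random walk.

//@version=5
indicator("Scalar Kalman sketch", overlay=true)

q = input.float(0.01, "Process noise Q")
r = input.float(1.0, "Measurement noise R")

var float x_est = na  // state estimate
var float p_est = 1.0 // estimate covariance

if na(x_est)
    x_est := close
else
    // predict step (random-walk state model)
    x_pred = x_est
    p_pred = p_est + q
    // update step, treating the close as a noisy "measurement"
    kg = p_pred / (p_pred + r) // Kalman gain
    x_est := x_pred + kg * (close - x_pred)
    p_est := (1 - kg) * p_pred

plot(x_est, "Filtered state", color.orange, 2)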
Two
Upscaling the tick lattice, aka modelling prices as if the native tick size were lower. A very specific task, mostly needed in HFT or just for analytical purposes. Some instruments like ZN have huge tick sizes: they are traded a lot but barely cover more than a 20-tick range in a session. The idea is to model the raw data as an AR(2) process, learn phi1 and phi2, make one-step forecasts based on them, and take the process noise as the variance of the errors from these forecasts. The measurement noise here is legit: it's quantization noise based on the tick size, no olympic gold in mental gymnastics required xd
^^ artificially upscaling ZN futures tick lattice
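A rough sketch of how the two noise terms for this use case could be defined. This is not the published script: phi1 and phi2 are assumed to be estimated elsewhere and are plain inputs here, and the measurement noise uses the standard quantization-noise variance tick² / 12.

//@version=5
indicator("AR(2) noise terms sketch")

phi1 = input.float(1.2, "phi1 (assumed pre-estimated)")
phi2 = input.float(-0.3, "phi2 (assumed pre-estimated)")
len  = input.int(200, "Residual variance window")

// one-step AR(2) forecast of the close from the two previous closes
forecast = phi1 * close[1] + phi2 * close[2]
resid = close - forecast

// process noise Q: variance of the one-step forecast errors
q = ta.variance(resid, len)

// measurement noise R: quantization noise of the tick lattice, tick^2 / 12
tick = syminfo.mintick
r = tick * tick / 12

plot(q, "Process noise Q", color.teal)
plot(r, "Measurement noise R", color.red)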
...
I really made it available here so you guys can take it and do some crazy ish with it; just let state space models abduct your consciousness and never look back
∞
Statistics
Dimensional Resonance Protocol
🌀 CORE INNOVATION: PHASE SPACE RECONSTRUCTION & EMERGENCE DETECTION
The Dimensional Resonance Protocol represents a paradigm shift from traditional technical analysis to complexity science. Rather than measuring price levels or indicator crossovers, DRP reconstructs the hidden attractor governing market dynamics using Takens' embedding theorem, then detects emergence —the rare moments when multiple dimensions of market behavior spontaneously synchronize into coherent, predictable states.
The Complexity Hypothesis:
Markets are not simple oscillators or random walks—they are complex adaptive systems existing in high-dimensional phase space. Traditional indicators see only shadows (one-dimensional projections) of this higher-dimensional reality. DRP reconstructs the full phase space using time-delay embedding, revealing the true structure of market dynamics.
Takens' Embedding Theorem (1981):
A profound mathematical result from dynamical systems theory: Given a time series from a complex system, we can reconstruct its full phase space by creating delayed copies of the observation.
Mathematical Foundation:
From single observable x(t), create embedding vectors:
X(t) = [x(t), x(t-τ), x(t-2τ), ..., x(t-(d-1)τ)]
Where:
• d = Embedding dimension (default 5)
• τ = Time delay (default 3 bars)
• x(t) = Price or return at time t
Key Insight: If d ≥ 2D+1 (where D is the true attractor dimension), this embedding is topologically equivalent to the actual system dynamics. We've reconstructed the hidden attractor from a single price series.
Why This Matters:
Markets appear random in one dimension (price chart). But in reconstructed phase space, structure emerges—attractors, limit cycles, strange attractors. When we identify these structures, we can detect:
• Stable regions : Predictable behavior (trade opportunities)
• Chaotic regions : Unpredictable behavior (avoid trading)
• Critical transitions : Phase changes between regimes
Phase Space Magnitude Calculation:
phase_magnitude = sqrt(Σ x(t - iτ)² for i = 0 to d-1)
This measures the "energy" or "momentum" of the market trajectory through phase space. High magnitude = strong directional move. Low magnitude = consolidation.
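A minimal Pine Script sketch of the embedding vector and its magnitude as written above. This is an illustration of the formulas only, not the published source; the standardization window is an assumption.

//@version=5
indicator("Delay embedding sketch")

d   = input.int(5, "Embedding dimension d")
tau = input.int(3, "Time delay tau")

// standardize the observable so the magnitude is scale-free (window is an assumption)
z = (close - ta.sma(close, 50)) / ta.stdev(close, 50)

// X(t) = [z(t), z(t - tau), ..., z(t - (d-1)*tau)]
float sum_sq = 0.0
for i = 0 to d - 1
    sum_sq += math.pow(z[i * tau], 2)

phase_magnitude = math.sqrt(sum_sq)
plot(phase_magnitude, "Phase space magnitude", color.aqua)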
📊 RECURRENCE QUANTIFICATION ANALYSIS (RQA)
Once phase space is reconstructed, we analyze its recurrence structure —when does the system return near previous states?
Recurrence Plot Foundation:
A recurrence occurs when two phase space points are closer than threshold ε:
R(i,j) = 1 if ||X(i) - X(j)|| < ε, else 0
This creates a binary matrix showing when the system revisits similar states.
Key RQA Metrics:
1. Recurrence Rate (RR):
RR = (Number of recurrent points) / (Total possible pairs)
• RR near 0: System never repeats (highly stochastic)
• RR = 0.1-0.3: Moderate recurrence (tradeable patterns)
• RR > 0.5: System stuck in attractor (ranging market)
• RR near 1: System frozen (no dynamics)
Interpretation: Moderate recurrence is optimal —patterns exist but market isn't stuck.
2. Determinism (DET):
Measures what fraction of recurrences form diagonal structures in the recurrence plot. Diagonals indicate deterministic evolution (trajectory follows predictable paths).
DET = (Recurrence points on diagonals) / (Total recurrence points)
• DET < 0.3: Random dynamics
• DET = 0.3-0.7: Moderate determinism (patterns with noise)
• DET > 0.7: Strong determinism (technical patterns reliable)
Trading Implication: Signals are prioritized when DET > 0.3 (deterministic state) and RR is moderate (not stuck).
Threshold Selection (ε):
Default ε = 0.10 × std_dev means two states are "recurrent" if within 10% of a standard deviation. This is tight enough to require genuine similarity but loose enough to find patterns.
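As an illustration of the recurrence rate described above, the following sketch counts recurrent pairs of standardized values inside a rolling window. It uses a one-dimensional distance instead of the full phase-space distance, so it is a simplification, not the indicator's actual RQA code.

//@version=5
indicator("Recurrence rate sketch")

win = input.int(30, "RQA window")
eps = input.float(0.10, "Epsilon (in stdev units)")

// standardized observable: distances are then already in stdev units
z = (close - ta.sma(close, win)) / ta.stdev(close, win)

// count pairs (i, j) in the window with |z(i) - z(j)| < epsilon
int recurrent = 0
int pairs = 0
for i = 0 to win - 2
    for j = i + 1 to win - 1
        pairs += 1
        if math.abs(z[i] - z[j]) < eps
            recurrent += 1

rr = recurrent / pairs // recurrence rate RR
plot(rr, "Recurrence rate", color.yellow)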
🔬 PERMUTATION ENTROPY: COMPLEXITY MEASUREMENT
Permutation entropy measures the complexity of a time series by analyzing the distribution of ordinal patterns.
Algorithm (Bandt & Pompe, 2002):
1. Take overlapping windows of length n (default n=4)
2. For each window, record the rank order pattern
Example: the window [1.2, 0.8, 1.5, 1.1] → pattern (3, 1, 4, 2) (ranks from lowest to highest)
3. Count frequency of each possible pattern
4. Calculate Shannon entropy of pattern distribution
Mathematical Formula:
H_perm = -Σ p(π) · ln(p(π))
Where π ranges over all n! possible permutations, p(π) is the probability of pattern π.
Normalized to [0, 1]:
H_norm = H_perm / ln(n!)
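A compact sketch of the calculation, using order n = 3 (6 patterns) instead of the indicator's default n = 4 so the pattern encoding stays short. Ties are broken by whichever branch matches first; this is an illustration, not the published implementation.

//@version=5
indicator("Permutation entropy sketch")

win = input.int(30, "Entropy window")

// pattern counts for the 6 ordinal patterns of order n = 3
counts = array.new_int(6, 0)

for i = 0 to win - 3
    a = close[i + 2] // oldest value in the length-3 window
    b = close[i + 1]
    c = close[i]     // newest value
    int code = 5
    if a <= b and b <= c
        code := 0
    else if a <= c and c < b
        code := 1
    else if b < a and a <= c
        code := 2
    else if b <= c and c < a
        code := 3
    else if c < a and a <= b
        code := 4
    array.set(counts, code, array.get(counts, code) + 1)

// Shannon entropy of the pattern distribution, normalized by ln(3!) = ln(6)
total = win - 2
float h = 0.0
for k = 0 to 5
    p = array.get(counts, k) / total
    if p > 0
        h -= p * math.log(p)

h_norm = h / math.log(6)
plot(h_norm, "Permutation entropy (n = 3)", color.purple)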
Interpretation:
• H < 0.3 : Very ordered, crystalline structure (strong trending)
• H = 0.3-0.5 : Ordered regime (tradeable with patterns)
• H = 0.5-0.7 : Moderate complexity (mixed conditions)
• H = 0.7-0.85 : Complex dynamics (challenging to trade)
• H > 0.85 : Maximum entropy (nearly random, avoid)
Entropy Regime Classification:
DRP classifies markets into five entropy regimes:
• CRYSTALLINE (H < 0.3): Maximum order, persistent trends
• ORDERED (H < 0.5): Clear patterns, momentum strategies work
• MODERATE (H < 0.7): Mixed dynamics, adaptive required
• COMPLEX (H < 0.85): High entropy, mean reversion better
• CHAOTIC (H ≥ 0.85): Near-random, minimize trading
Why Permutation Entropy?
Unlike traditional entropy methods requiring binning continuous data (losing information), permutation entropy:
• Works directly on time series
• Robust to monotonic transformations
• Computationally efficient
• Captures temporal structure, not just distribution
• Immune to outliers (uses ranks, not values)
⚡ LYAPUNOV EXPONENT: CHAOS vs STABILITY
The Lyapunov exponent λ measures sensitivity to initial conditions —the hallmark of chaos.
Physical Meaning:
Two trajectories starting infinitely close will diverge at exponential rate e^(λt):
Distance(t) ≈ Distance(0) × e^(λt)
Interpretation:
• λ > 0 : Positive Lyapunov exponent = CHAOS
- Small errors grow exponentially
- Long-term prediction impossible
- System is sensitive, unpredictable
- AVOID TRADING
• λ ≈ 0 : Near-zero = CRITICAL STATE
- Edge of chaos
- Transition zone between order and disorder
- Moderate predictability
- PROCEED WITH CAUTION
• λ < 0 : Negative Lyapunov exponent = STABLE
- Small errors decay
- Trajectories converge
- System is predictable
- OPTIMAL FOR TRADING
Estimation Method:
DRP estimates λ by tracking how quickly nearby states diverge over a rolling window (default 20 bars):
For each bar i in window:
δ₀ = |x(i) - x(i-1)| (initial separation)
δ₁ = |x(i-1) - x(i-2)| (previous separation)
if δ₁ > 0:
ratio = δ₀ / δ₁
log_ratios += ln(ratio)
λ ≈ average(log_ratios)
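A direct Pine Script transcription of that estimator (illustrative only; the real indicator may differ in details such as how separations are defined):

//@version=5
indicator("Lyapunov estimate sketch")

win = input.int(20, "Lyapunov window")

float sum_log = 0.0
int   cnt     = 0
for i = 0 to win - 1
    d_now  = math.abs(close[i] - close[i + 1])     // current separation
    d_prev = math.abs(close[i + 1] - close[i + 2]) // previous separation
    if d_prev > 0 and d_now > 0
        sum_log += math.log(d_now / d_prev)
        cnt += 1

lambda_est = cnt > 0 ? sum_log / cnt : 0.0
plot(lambda_est, "Lyapunov exponent (estimate)", lambda_est < 0 ? color.green : color.red)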
Stability Classification:
• STABLE : λ < 0 (negative growth rate)
• CRITICAL : |λ| < 0.1 (near neutral)
• CHAOTIC : λ > 0.2 (strong positive growth)
Signal Filtering:
By default, NEXUS requires λ < 0 (stable regime) for signal confirmation. This filters out trades during chaotic periods when technical patterns break down.
📐 HIGUCHI FRACTAL DIMENSION
Fractal dimension measures self-similarity and complexity of the price trajectory.
Theoretical Background:
A curve's fractal dimension D ranges from 1 (smooth line) to 2 (space-filling curve):
• D ≈ 1.0 : Smooth, persistent trending
• D ≈ 1.5 : Random walk (Brownian motion)
• D ≈ 2.0 : Highly irregular, space-filling
Higuchi Method (1988):
For a time series of length N, construct k different curves by taking every k-th point:
L_m(k) = (1/k) × Σ |x(m + i·k) - x(m + (i-1)·k)| × (N-1) / (⌊(N-m)/k⌋ × k)
For different values of k (1 to k_max), calculate L(k). The fractal dimension is the slope of log(L(k)) vs log(1/k):
D = slope of log(L) vs log(1/k)
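A sketch of the Higuchi procedure as described above: build the length curves L(k), then regress log(L) on log(1/k) to get the slope. The window and k-max follow the stated defaults; this is an illustration of the method, not the indicator's exact code.

//@version=5
indicator("Higuchi fractal dimension sketch")

win  = input.int(50, "Fractal window N")
kmax = input.int(8,  "k-max")

// regression accumulators for log(L(k)) vs log(1/k)
float sx  = 0.0
float sy  = 0.0
float sxy = 0.0
float sxx = 0.0
int   cnt = 0

for k = 1 to kmax
    // average curve length over the k starting offsets m = 0 .. k-1
    float lk = 0.0
    for m = 0 to k - 1
        float len_m = 0.0
        steps = int(math.floor((win - 1 - m) / k))
        if steps > 0
            for i = 1 to steps
                len_m += math.abs(close[m + i * k] - close[m + (i - 1) * k])
            len_m := len_m * (win - 1) / (steps * k) / k // Higuchi normalization
        lk += len_m
    lk /= k
    if lk > 0
        x = math.log(1.0 / k)
        y = math.log(lk)
        sx  += x
        sy  += y
        sxy += x * y
        sxx += x * x
        cnt += 1

fd = cnt > 1 ? (cnt * sxy - sx * sy) / (cnt * sxx - sx * sx) : na
plot(fd, "Higuchi fractal dimension", color.orange)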
Market Interpretation:
• D < 1.35 : Strong trending, persistent (Hurst > 0.5)
- TRENDING regime
- Momentum strategies favored
- Breakouts likely to continue
• D = 1.35-1.45 : Moderate persistence
- PERSISTENT regime
- Trend-following with caution
- Patterns have meaning
• D = 1.45-1.55 : Random walk territory
- RANDOM regime
- Efficiency hypothesis holds
- Technical analysis least reliable
• D = 1.55-1.65 : Anti-persistent (mean-reverting)
- ANTI-PERSISTENT regime
- Oscillator strategies work
- Overbought/oversold meaningful
• D > 1.65 : Highly complex, choppy
- COMPLEX regime
- Avoid directional bets
- Wait for regime change
Signal Filtering:
Resonance signals (secondary signal type) require D < 1.5, indicating trending or persistent dynamics where momentum has meaning.
🔗 TRANSFER ENTROPY: CAUSAL INFORMATION FLOW
Transfer entropy measures directed causal influence between time series—not just correlation, but actual information transfer.
Schreiber's Definition (2000):
Transfer entropy from X to Y measures how much knowing X's past reduces uncertainty about Y's future:
TE(X→Y) = H(Y_future | Y_past) - H(Y_future | Y_past, X_past)
Where H is Shannon entropy.
Key Properties:
1. Directional : TE(X→Y) ≠ TE(Y→X) in general
2. Non-linear : Detects complex causal relationships
3. Model-free : No assumptions about functional form
4. Lag-independent : Captures delayed causal effects
Three Causal Flows Measured:
1. Volume → Price (TE_V→P):
Measures how much volume patterns predict price changes.
• TE > 0 : Volume provides predictive information about price
- Institutional participation driving moves
- Volume confirms direction
- High reliability
• TE ≈ 0 : No causal flow (weak volume/price relationship)
- Volume uninformative
- Caution on signals
• TE < 0 (rare): Suggests price leading volume
- Potentially manipulated or thin market
2. Volatility → Momentum (TE_σ→M):
Does volatility expansion predict momentum changes?
• Positive TE : Volatility precedes momentum shifts
- Breakout dynamics
- Regime transitions
3. Structure → Price (TE_S→P):
Do support/resistance patterns causally influence price?
• Positive TE : Structural levels have causal impact
- Technical levels matter
- Market respects structure
Net Causal Flow:
Net_Flow = TE_V→P + 0.5·TE_σ→M + TE_S→P
• Net > +0.1 : Bullish causal structure
• Net < -0.1 : Bearish causal structure
• |Net| < 0.1 : Neutral/unclear causation
Causal Gate:
For signal confirmation, NEXUS requires:
• Buy signals : TE_V→P > 0 AND Net_Flow > 0.05
• Sell signals : TE_V→P > 0 AND Net_Flow < -0.05
This ensures volume is actually driving price (causal support exists), not just correlated noise.
Implementation Note:
Computing true transfer entropy requires discretizing continuous data into bins (default 6 bins) and estimating joint probability distributions. NEXUS uses a hybrid approach combining TE theory with autocorrelation structure and lagged cross-correlation to approximate information transfer in a computationally efficient manner.
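For readers who want to experiment with the lagged-correlation style of proxy this note describes, here is a minimal sketch. It is not the indicator's internal code; the asymmetry of lagged cross-correlations is used as a crude, sign-only stand-in for TE(V→P).

//@version=5
indicator("Causal-flow proxy sketch")

len = input.int(50, "Correlation window")

price_chg = ta.roc(close, 1)
vol_norm  = (volume - ta.sma(volume, 20)) / ta.stdev(volume, 20)

// how well past volume explains the current price change, minus the reverse direction
vol_leads_price = ta.correlation(vol_norm[1], price_chg, len)
price_leads_vol = ta.correlation(price_chg[1], vol_norm, len)
te_vp_proxy = vol_leads_price - price_leads_vol

plot(te_vp_proxy, "TE(V to P) proxy", te_vp_proxy > 0 ? color.teal : color.gray)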
🌊 HILBERT PHASE COHERENCE
Phase coherence measures synchronization across market dimensions using Hilbert transform analysis.
Hilbert Transform Theory:
For a signal x(t), the Hilbert transform H[x](t) creates an analytic signal:
z(t) = x(t) + i·H[x](t) = A(t)·e^(iφ(t))
Where:
• A(t) = Instantaneous amplitude
• φ(t) = Instantaneous phase
Instantaneous Phase:
φ(t) = arctan(H[x](t) / x(t))
The phase represents where the signal is in its natural cycle—analogous to position on a unit circle.
Four Dimensions Analyzed:
1. Momentum Phase : Phase of price rate-of-change
2. Volume Phase : Phase of volume intensity
3. Volatility Phase : Phase of ATR cycles
4. Structure Phase : Phase of position within range
Phase Locking Value (PLV):
For two signals with phases φ₁(t) and φ₂(t), PLV measures phase synchronization:
PLV = |⟨e^(i(φ₁(t) - φ₂(t)))⟩|
Where ⟨·⟩ is time average over window.
Interpretation:
• PLV = 0 : Completely random phase relationship (no synchronization)
• PLV = 0.5 : Moderate phase locking
• PLV = 1 : Perfect synchronization (phases locked)
Pairwise PLV Calculations:
• PLV_momentum-volume : Are momentum and volume cycles synchronized?
• PLV_momentum-structure : Are momentum cycles aligned with structure?
• PLV_volume-structure : Are volume and structural patterns in phase?
Overall Phase Coherence:
Coherence = (PLV_mom-vol + PLV_mom-struct + PLV_vol-struct) / 3
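A sketch of the PLV and coherence arithmetic. The two phase series here are crude placeholders (range position mapped to 0 to 2π) standing in for the Hilbert-derived phases, so only the PLV/averaging math should be read as representative.

//@version=5
indicator("Phase locking value sketch")

len = input.int(30, "PLV window")

// placeholder phases: range position mapped to [0, 2*pi] instead of a true Hilbert phase
phi1 = 2 * math.pi * (close  - ta.lowest(close, 20))  / (ta.highest(close, 20)  - ta.lowest(close, 20))
phi2 = 2 * math.pi * (volume - ta.lowest(volume, 20)) / (ta.highest(volume, 20) - ta.lowest(volume, 20))

// PLV = |<e^{i(phi1 - phi2)}>| = sqrt(mean(cos dphi)^2 + mean(sin dphi)^2)
dphi = phi1 - phi2
mean_cos = ta.sma(math.cos(dphi), len)
mean_sin = ta.sma(math.sin(dphi), len)
plv = math.sqrt(mean_cos * mean_cos + mean_sin * mean_sin)

plot(plv, "PLV", plv >= 0.7 ? color.aqua : color.gray)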
Signal Confirmation:
Emergence signals require coherence ≥ threshold (default 0.70):
• Below 0.70: Dimensions not synchronized, no coherent market state
• Above 0.70: Dimensions in phase, coherent behavior emerging
Coherence Direction:
The summed phase angles indicate whether synchronized dimensions point bullish or bearish:
Direction = sin(φ_momentum) + 0.5·sin(φ_volume) + 0.5·sin(φ_structure)
• Direction > 0 : Phases pointing upward (bullish synchronization)
• Direction < 0 : Phases pointing downward (bearish synchronization)
🌀 EMERGENCE SCORE: MULTI-DIMENSIONAL ALIGNMENT
The emergence score aggregates all complexity metrics into a single 0-1 value representing market coherence.
Eight Components with Weights:
1. Phase Coherence (20%):
Direct contribution: coherence × 0.20
Measures dimensional synchronization.
2. Entropy Regime (15%):
Contribution: (0.6 - H_perm) / 0.6 × 0.15 if H < 0.6, else 0
Rewards low entropy (ordered, predictable states).
3. Lyapunov Stability (12%):
• λ < 0 (stable): +0.12
• |λ| < 0.1 (critical): +0.08
• λ > 0.2 (chaotic): +0.0
Requires stable, predictable dynamics.
4. Fractal Dimension Trending (12%):
Contribution: (1.45 - D) / 0.45 × 0.12 if D < 1.45, else 0
Rewards trending fractal structure (D < 1.45).
5. Dimensional Resonance (12%):
Contribution: |dimensional_resonance| × 0.12
Measures alignment across momentum, volume, structure, volatility dimensions.
6. Causal Flow Strength (9%):
Contribution: |net_causal_flow| × 0.09
Rewards strong causal relationships.
7. Phase Space Embedding (10%):
Contribution: min(|phase_magnitude_norm|, 3.0) / 3.0 × 0.10 if |magnitude| > 1.0
Rewards strong trajectory in reconstructed phase space.
8. Recurrence Quality (10%):
Contribution: determinism × 0.10 if DET > 0.3 AND 0.1 < RR < 0.8
Rewards deterministic patterns with moderate recurrence.
Total Emergence Score:
E = Σ(components) ∈ [0, 1]
Capped at 1.0 maximum.
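The weighting itself is simple arithmetic; a sketch of the aggregation follows. The component values are exposed as inputs here only so the snippet runs standalone; in the indicator they come from the calculations described above.

//@version=5
indicator("Emergence score aggregation sketch")

// placeholder component values (inputs only so the sketch is runnable)
coherence = input.float(0.75,  "Phase coherence", minval=0, maxval=1)
h_perm    = input.float(0.45,  "Permutation entropy", minval=0, maxval=1)
lyap      = input.float(-0.05, "Lyapunov exponent")
fd        = input.float(1.35,  "Fractal dimension")
resonance = input.float(0.5,   "Dimensional resonance", minval=-1, maxval=1)
net_flow  = input.float(0.2,   "Net causal flow")
mag_norm  = input.float(1.5,   "Phase-space magnitude (normalized)")
det       = input.float(0.5,   "Determinism DET", minval=0, maxval=1)
rr        = input.float(0.3,   "Recurrence rate RR", minval=0, maxval=1)

e  = coherence * 0.20
e += h_perm < 0.6 ? (0.6 - h_perm) / 0.6 * 0.15 : 0.0
e += lyap < 0 ? 0.12 : math.abs(lyap) < 0.1 ? 0.08 : 0.0
e += fd < 1.45 ? (1.45 - fd) / 0.45 * 0.12 : 0.0
e += math.abs(resonance) * 0.12
e += math.abs(net_flow) * 0.09
e += math.abs(mag_norm) > 1.0 ? math.min(math.abs(mag_norm), 3.0) / 3.0 * 0.10 : 0.0
e += det > 0.3 and rr > 0.1 and rr < 0.8 ? det * 0.10 : 0.0

emergence = math.min(e, 1.0)
plot(emergence, "Emergence score")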
Emergence Direction:
Separate calculation determining bullish vs bearish:
• Dimensional resonance sign
• Net causal flow sign
• Phase magnitude correlation with momentum
Signal Threshold:
Default emergence_threshold = 0.75 means 75% of maximum possible emergence score required to trigger signals.
Why Emergence Matters:
Traditional indicators measure single dimensions. Emergence detects self-organization —when multiple independent dimensions spontaneously align. This is the market equivalent of a phase transition in physics, where microscopic chaos gives way to macroscopic order.
These are the highest-probability trade opportunities because the entire system is resonating in the same direction.
🎯 SIGNAL GENERATION: EMERGENCE vs RESONANCE
DRP generates two tiers of signals with different requirements:
TIER 1: EMERGENCE SIGNALS (Primary)
Requirements:
1. Emergence score ≥ threshold (default 0.75)
2. Phase coherence ≥ threshold (default 0.70)
3. Emergence direction > 0.2 (bullish) or < -0.2 (bearish)
4. Causal gate passed (if enabled): TE_V→P > 0 and net_flow confirms direction
5. Stability zone (if enabled): λ < 0 or |λ| < 0.1
6. Price confirmation: Close > open (bulls) or close < open (bears)
7. Cooldown satisfied: bars_since_signal ≥ cooldown_period
EMERGENCE BUY:
• All above conditions met with bullish direction
• Market has achieved coherent bullish state
• Multiple dimensions synchronized upward
EMERGENCE SELL:
• All above conditions met with bearish direction
• Market has achieved coherent bearish state
• Multiple dimensions synchronized downward
Premium Emergence:
When signal_quality (emergence_score × phase_coherence) > 0.7:
• Displayed as ★ star symbol
• Highest conviction trades
• Maximum dimensional alignment
Standard Emergence:
When signal_quality is 0.5-0.7:
• Displayed as ◆ diamond symbol
• Strong signals but not perfect alignment
TIER 2: RESONANCE SIGNALS (Secondary)
Requirements:
1. Dimensional resonance > +0.6 (bullish) or < -0.6 (bearish)
2. Fractal dimension < 1.5 (trending/persistent regime)
3. Price confirmation matches direction
4. NOT in chaotic regime (λ < 0.2)
5. Cooldown satisfied
6. NO emergence signal firing (resonance is fallback)
RESONANCE BUY:
• Dimensional alignment without full emergence
• Trending fractal structure
• Moderate conviction
RESONANCE SELL:
• Dimensional alignment without full emergence
• Bearish resonance with trending structure
• Moderate conviction
Displayed as small ▲/▼ triangles with transparency.
Signal Hierarchy:
IF emergence conditions met:
Fire EMERGENCE signal (★ or ◆)
ELSE IF resonance conditions met:
Fire RESONANCE signal (▲ or ▼)
ELSE:
No signal
Cooldown System:
After any signal fires, cooldown_period (default 5 bars) must elapse before next signal. This prevents signal clustering during persistent conditions.
Cooldown tracks using bar_index:
bars_since_signal = current_bar_index - last_signal_bar_index
cooldown_ok = bars_since_signal >= cooldown_period
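A runnable sketch of the cooldown and the two-tier hierarchy. The emergence/resonance conditions are simple moving-average placeholders here so the gating logic can be seen in isolation; they are not the indicator's real conditions.

//@version=5
indicator("Signal cooldown sketch", overlay=true)

cooldown = input.int(5, "Signal cooldown (bars)")

// placeholder gates standing in for the real emergence / resonance conditions
emergence_ok = ta.crossover(ta.ema(close, 8), ta.ema(close, 21))
resonance_ok = ta.crossover(close, ta.ema(close, 8))

var int last_signal_bar = -10000
bars_since_signal = bar_index - last_signal_bar
cooldown_ok = bars_since_signal >= cooldown

fire_emergence = emergence_ok and cooldown_ok
fire_resonance = not fire_emergence and resonance_ok and cooldown_ok // resonance is the fallback

if fire_emergence or fire_resonance
    last_signal_bar := bar_index

plotshape(fire_emergence, "Emergence", shape.diamond, location.belowbar, color.aqua)
plotshape(fire_resonance, "Resonance", shape.triangleup, location.belowbar, color.new(color.aqua, 60), size=size.tiny)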
🎨 VISUAL SYSTEM: MULTI-LAYER COMPLEXITY
DRP provides rich visual feedback across four distinct layers:
LAYER 1: COHERENCE FIELD (Background)
Colored background intensity based on phase coherence:
• No background : Coherence < 0.5 (incoherent state)
• Faint glow : Coherence 0.5-0.7 (building coherence)
• Stronger glow : Coherence > 0.7 (coherent state)
Color:
• Cyan/teal: Bullish coherence (direction > 0)
• Red/magenta: Bearish coherence (direction < 0)
• Blue: Neutral coherence (direction ≈ 0)
Transparency: 98 minus (coherence_intensity × 10), so higher coherence = more visible.
LAYER 2: STABILITY/CHAOS ZONES
Background color indicating Lyapunov regime:
• Green tint (95% transparent): λ < 0, STABLE zone
- Safe to trade
- Patterns meaningful
• Gold tint (90% transparent): |λ| < 0.1, CRITICAL zone
- Edge of chaos
- Moderate risk
• Red tint (85% transparent): λ > 0.2, CHAOTIC zone
- Avoid trading
- Unpredictable behavior
LAYER 3: DIMENSIONAL RIBBONS
Three EMAs representing dimensional structure:
• Fast ribbon : EMA(8) in cyan/teal (fast dynamics)
• Medium ribbon : EMA(21) in blue (intermediate)
• Slow ribbon : EMA(55) in red/magenta (slow dynamics)
Provides visual reference for multi-scale structure without cluttering with raw phase space data.
LAYER 4: CAUSAL FLOW LINE
A thicker line plotted at EMA(13) colored by net causal flow:
• Cyan/teal : Net_flow > +0.1 (bullish causation)
• Red/magenta : Net_flow < -0.1 (bearish causation)
• Gray : |Net_flow| < 0.1 (neutral causation)
Shows real-time direction of information flow.
EMERGENCE FLASH:
Strong background flash when emergence signals fire:
• Cyan flash for emergence buy
• Red flash for emergence sell
• 80% transparency for visibility without obscuring price
📊 COMPREHENSIVE DASHBOARD
Real-time monitoring of all complexity metrics:
HEADER:
• 🌀 DRP branding with gold accent
CORE METRICS:
EMERGENCE:
• Progress bar (█ filled, ░ empty) showing 0-100%
• Percentage value
• Direction arrow (↗ bull, ↘ bear, → neutral)
• Color-coded: Green/gold if active, gray if low
COHERENCE:
• Progress bar showing phase locking value
• Percentage value
• Checkmark ✓ if ≥ threshold, circle ○ if below
• Color-coded: Cyan if coherent, gray if not
COMPLEXITY SECTION:
ENTROPY:
• Regime name (CRYSTALLINE/ORDERED/MODERATE/COMPLEX/CHAOTIC)
• Numerical value (0.00-1.00)
• Color: Green (ordered), gold (moderate), red (chaotic)
LYAPUNOV:
• State (STABLE/CRITICAL/CHAOTIC)
• Numerical value (typically -0.5 to +0.5)
• Status indicator: ● stable, ◐ critical, ○ chaotic
• Color-coded by state
FRACTAL:
• Regime (TRENDING/PERSISTENT/RANDOM/ANTI-PERSIST/COMPLEX)
• Dimension value (1.0-2.0)
• Color: Cyan (trending), gold (random), red (complex)
PHASE-SPACE:
• State (STRONG/ACTIVE/QUIET)
• Normalized magnitude value
• Parameters display: d=5 τ=3
CAUSAL SECTION:
CAUSAL:
• Direction (BULL/BEAR/NEUTRAL)
• Net flow value
• Flow indicator: →P (to price), P← (from price), ○ (neutral)
V→P:
• Volume-to-price transfer entropy
• Small display showing specific TE value
DIMENSIONAL SECTION:
RESONANCE:
• Progress bar of absolute resonance
• Signed value (-1 to +1)
• Color-coded by direction
RECURRENCE:
• Recurrence rate percentage
• Determinism percentage display
• Color-coded: Green if high quality
STATE SECTION:
STATE:
• Current mode: EMERGENCE / RESONANCE / CHAOS / SCANNING
• Icon: 🚀 (emergence buy), 💫 (emergence sell), ▲ (resonance buy), ▼ (resonance sell), ⚠ (chaos), ◎ (scanning)
• Color-coded by state
SIGNALS:
• E: count of emergence signals
• R: count of resonance signals
⚙️ KEY PARAMETERS EXPLAINED
Phase Space Configuration:
• Embedding Dimension (3-10, default 5): Reconstruction dimension
- Low (3-4): Simple dynamics, faster computation
- Medium (5-6): Balanced (recommended)
- High (7-10): Complex dynamics, more data needed
- Rule: d ≥ 2D+1 where D is true dimension
• Time Delay (τ) (1-10, default 3): Embedding lag
- Fast markets: 1-2
- Normal: 3-4
- Slow markets: 5-10
- Optimal: First minimum of mutual information (often 2-4)
• Recurrence Threshold (ε) (0.01-0.5, default 0.10): Phase space proximity
- Tight (0.01-0.05): Very similar states only
- Medium (0.08-0.15): Balanced
- Loose (0.20-0.50): Liberal matching
Entropy & Complexity:
• Permutation Order (3-7, default 4): Pattern length
- Low (3): 6 patterns, fast but coarse
- Medium (4-5): 24-120 patterns, balanced
- High (6-7): 720-5040 patterns, fine-grained
- Note: Requires window >> order! for stability
• Entropy Window (15-100, default 30): Lookback for entropy
- Short (15-25): Responsive to changes
- Medium (30-50): Stable measure
- Long (60-100): Very smooth, slow adaptation
• Lyapunov Window (10-50, default 20): Stability estimation window
- Short (10-15): Fast chaos detection
- Medium (20-30): Balanced
- Long (40-50): Stable λ estimate
Causal Inference:
• Enable Transfer Entropy (default ON): Causality analysis
- Keep ON for full system functionality
• TE History Length (2-15, default 5): Causal lookback
- Short (2-4): Quick causal detection
- Medium (5-8): Balanced
- Long (10-15): Deep causal analysis
• TE Discretization Bins (4-12, default 6): Binning granularity
- Few (4-5): Coarse, robust, needs less data
- Medium (6-8): Balanced
- Many (9-12): Fine-grained, needs more data
Phase Coherence:
• Enable Phase Coherence (default ON): Synchronization detection
- Keep ON for emergence detection
• Coherence Threshold (0.3-0.95, default 0.70): PLV requirement
- Loose (0.3-0.5): More signals, lower quality
- Balanced (0.6-0.75): Recommended
- Strict (0.8-0.95): Rare, highest quality
• Hilbert Smoothing (3-20, default 8): Phase smoothing
- Low (3-5): Responsive, noisier
- Medium (6-10): Balanced
- High (12-20): Smooth, more lag
Fractal Analysis:
• Enable Fractal Dimension (default ON): Complexity measurement
- Keep ON for full analysis
• Fractal K-max (4-20, default 8): Scaling range
- Low (4-6): Faster, less accurate
- Medium (7-10): Balanced
- High (12-20): Accurate, slower
• Fractal Window (30-200, default 50): FD lookback
- Short (30-50): Responsive FD
- Medium (60-100): Stable FD
- Long (120-200): Very smooth FD
Emergence Detection:
• Emergence Threshold (0.5-0.95, default 0.75): Minimum coherence
- Sensitive (0.5-0.65): More signals
- Balanced (0.7-0.8): Recommended
- Strict (0.85-0.95): Rare signals
• Require Causal Gate (default ON): TE confirmation
- ON: Only signal when causality confirms
- OFF: Allow signals without causal support
• Require Stability Zone (default ON): Lyapunov filter
- ON: Only signal when λ < 0 (stable) or |λ| < 0.1 (critical)
- OFF: Allow signals in chaotic regimes (risky)
• Signal Cooldown (1-50, default 5): Minimum bars between signals
- Fast (1-3): Rapid signal generation
- Normal (4-8): Balanced
- Slow (10-20): Very selective
- Ultra (25-50): Only major regime changes
Signal Configuration:
• Momentum Period (5-50, default 14): ROC calculation
• Structure Lookback (10-100, default 20): Support/resistance range
• Volatility Period (5-50, default 14): ATR calculation
• Volume MA Period (10-50, default 20): Volume normalization
Visual Settings:
• Customizable color scheme for all elements
• Toggle visibility for each layer independently
• Dashboard position (4 corners) and size (tiny/small/normal)
🎓 PROFESSIONAL USAGE PROTOCOL
Phase 1: System Familiarization (Week 1)
Goal: Understand complexity metrics and dashboard interpretation
Setup:
• Enable all features with default parameters
• Watch dashboard metrics for 500+ bars
• Do NOT trade yet
Actions:
• Observe emergence score patterns relative to price moves
• Note coherence threshold crossings and subsequent price action
• Watch entropy regime transitions (ORDERED → COMPLEX → CHAOTIC)
• Correlate Lyapunov state with signal reliability
• Track which signals appear (emergence vs resonance frequency)
Key Learning:
• When does emergence peak? (usually before major moves)
• What entropy regime produces best signals? (typically ORDERED or MODERATE)
• Does your instrument respect stability zones? (stable λ = better signals)
Phase 2: Parameter Optimization (Week 2)
Goal: Tune system to instrument characteristics
Requirements:
• Understand basic dashboard metrics from Phase 1
• Have 1000+ bars of history loaded
Embedding Dimension & Time Delay:
• If signals very rare: Try lower dimension (d=3-4) or shorter delay (τ=2)
• If signals too frequent: Try higher dimension (d=6-7) or longer delay (τ=4-5)
• Sweet spot: 4-8 emergence signals per 100 bars
Coherence Threshold:
• Check dashboard: What's typical coherence range?
• If coherence rarely exceeds 0.70: Lower threshold to 0.60-0.65
• If coherence often >0.80: Can raise threshold to 0.75-0.80
• Goal: Signals fire during top 20-30% of coherence values
Emergence Threshold:
• If too few signals: Lower to 0.65-0.70
• If too many signals: Raise to 0.80-0.85
• Balance with coherence threshold—both must be met
Phase 3: Signal Quality Assessment (Weeks 3-4)
Goal: Verify signals have edge via paper trading
Requirements:
• Parameters optimized per Phase 2
• 50+ signals generated
• Detailed notes on each signal
Paper Trading Protocol:
• Take EVERY emergence signal (★ and ◆)
• Optional: Take resonance signals (▲/▼) separately to compare
• Use simple exit: 2R target, 1R stop (ATR-based)
• Track: Win rate, average R-multiple, maximum consecutive losses
Quality Metrics:
• Premium emergence (★) : Should achieve >55% WR
• Standard emergence (◆) : Should achieve >50% WR
• Resonance signals : Should achieve >45% WR
• Overall : If <45% WR, system not suitable for this instrument/timeframe
Red Flags:
• Win rate <40%: Wrong instrument or parameters need major adjustment
• Max consecutive losses >10: System not working in current regime
• Profit factor <1.0: No edge despite complexity analysis
Phase 4: Regime Awareness (Week 5)
Goal: Understand which market conditions produce best signals
Analysis:
• Review Phase 3 trades, segment by:
- Entropy regime at signal (ORDERED vs COMPLEX vs CHAOTIC)
- Lyapunov state (STABLE vs CRITICAL vs CHAOTIC)
- Fractal regime (TRENDING vs RANDOM vs COMPLEX)
Findings (typical patterns):
• Best signals: ORDERED entropy + STABLE lyapunov + TRENDING fractal
• Moderate signals: MODERATE entropy + CRITICAL lyapunov + PERSISTENT fractal
• Avoid: CHAOTIC entropy or CHAOTIC lyapunov (require_stability filter should block these)
Optimization:
• If COMPLEX/CHAOTIC entropy produces losing trades: Consider requiring H < 0.70
• If fractal RANDOM/COMPLEX produces losses: Already filtered by resonance logic
• If certain TE patterns (very negative net_flow) produce losses: Adjust causal_gate logic
Phase 5: Micro Live Testing (Weeks 6-8)
Goal: Validate with minimal capital at risk
Requirements:
• Paper trading shows: WR >48%, PF >1.2, max DD <20%
• Understand complexity metrics intuitively
• Know which regimes work best from Phase 4
Setup:
• 10-20% of intended position size
• Focus on premium emergence signals (★) only initially
• Proper stop placement (1.5-2.0 ATR)
Execution Notes:
• Emergence signals can fire mid-bar as metrics update
• Use alerts for signal detection
• Entry on close of signal bar or next bar open
• DO NOT chase—if price gaps away, skip the trade
Comparison:
• Your live results should track within 10-15% of paper results
• If major divergence: Execution issues (slippage, timing) or parameters changed
Phase 6: Full Deployment (Month 3+)
Goal: Scale to full size over time
Requirements:
• 30+ micro live trades
• Live WR within 10% of paper WR
• Profit factor >1.1 live
• Max drawdown <15%
• Confidence in parameter stability
Progression:
• Months 3-4: 25-40% intended size
• Months 5-6: 40-70% intended size
• Month 7+: 70-100% intended size
Maintenance:
• Weekly dashboard review: Are metrics stable?
• Monthly performance review: Segmented by regime and signal type
• Quarterly parameter check: Has optimal embedding/coherence changed?
Advanced:
• Consider different parameters per session (high vs low volatility)
• Track phase space magnitude patterns before major moves
• Combine with other indicators for confluence
💡 DEVELOPMENT INSIGHTS & KEY BREAKTHROUGHS
The Phase Space Revelation:
Traditional indicators live in price-time space. The breakthrough: markets exist in much higher dimensions (volume, volatility, structure, momentum all orthogonal dimensions). Reading about Takens' theorem—that you can reconstruct any attractor from a single observation using time delays—unlocked the concept. Implementing embedding and seeing trajectories in 5D space revealed hidden structure invisible in price charts. Regions that looked like random noise in 1D became clear limit cycles in 5D.
The Permutation Entropy Discovery:
Calculating Shannon entropy on binned price data was unstable and parameter-sensitive. Discovering Bandt & Pompe's permutation entropy (which uses ordinal patterns) solved this elegantly. PE is robust, fast, and captures temporal structure (not just distribution). Testing showed PE < 0.5 periods had 18% higher signal win rate than PE > 0.7 periods. Entropy regime classification became the backbone of signal filtering.
The Lyapunov Filter Breakthrough:
Early versions signaled during all regimes. Win rate hovered at 42%—barely better than random. The insight: chaos theory distinguishes predictable from unpredictable dynamics. Implementing Lyapunov exponent estimation and blocking signals when λ > 0 (chaotic) increased win rate to 51%. Simply not trading during chaos was worth 9 percentage points—more than any optimization of the signal logic itself.
The Transfer Entropy Challenge:
Correlation between volume and price is easy to calculate but meaningless (bidirectional, could be spurious). Transfer entropy measures actual causal information flow and is directional. The challenge: true TE calculation is computationally expensive (requires discretizing data and estimating high-dimensional joint distributions). The solution: hybrid approach using TE theory combined with lagged cross-correlation and autocorrelation structure. Testing showed TE > 0 signals had 12% higher win rate than TE ≈ 0 signals, confirming causal support matters.
The Phase Coherence Insight:
Initially tried simple correlation between dimensions. Not predictive. Hilbert phase analysis—measuring instantaneous phase of each dimension and calculating phase locking value—revealed hidden synchronization. When PLV > 0.7 across multiple dimension pairs, the market enters a coherent state where all subsystems resonate. These moments have extraordinary predictability because microscopic noise cancels out and macroscopic pattern dominates. Emergence signals require high PLV for this reason.
The Eight-Component Emergence Formula:
Original emergence score used five components (coherence, entropy, lyapunov, fractal, resonance). Performance was good but not exceptional. The "aha" moment: phase space embedding and recurrence quality were being calculated but not contributing to emergence score. Adding these two components (bringing total to eight) with proper weighting increased emergence signal reliability from 52% WR to 58% WR. All calculated metrics must contribute to the final score. If you compute something, use it.
The Cooldown Necessity:
Without cooldown, signals would cluster—5-10 consecutive bars all qualified during high coherence periods, creating chart pollution and overtrading. Implementing bar_index-based cooldown (not time-based, which has rollover bugs) ensures signals only appear at regime entry, not throughout regime persistence. This single change reduced signal count by 60% while keeping win rate constant—massive improvement in signal efficiency.
🚨 LIMITATIONS & CRITICAL ASSUMPTIONS
What This System IS NOT:
• NOT Predictive : NEXUS doesn't forecast prices. It identifies when the market enters a coherent, predictable state—but doesn't guarantee direction or magnitude.
• NOT Holy Grail : Typical performance is 50-58% win rate with 1.5-2.0 avg R-multiple. This is probabilistic edge from complexity analysis, not certainty.
• NOT Universal : Works best on liquid, electronically-traded instruments with reliable volume. Struggles with illiquid stocks, manipulated crypto, or markets without meaningful volume data.
• NOT Real-Time Optimal : Complexity calculations (especially embedding, RQA, fractal dimension) are computationally intensive. Dashboard updates may lag by 1-2 seconds on slower connections.
• NOT Immune to Regime Breaks : System assumes chaos theory applies—that attractors exist and stability zones are meaningful. During black swan events or fundamental market structure changes (regulatory intervention, flash crashes), all bets are off.
Core Assumptions:
1. Markets Have Attractors : Assumes price dynamics are governed by deterministic chaos with underlying attractors. Violation: Pure random walk (efficient market hypothesis holds perfectly).
2. Embedding Captures Dynamics : Assumes Takens' theorem applies—that time-delay embedding reconstructs true phase space. Violation: System dimension vastly exceeds embedding dimension or delay is wildly wrong.
3. Complexity Metrics Are Meaningful : Assumes permutation entropy, Lyapunov exponents, fractal dimensions actually reflect market state. Violation: Markets driven purely by random external news flow (complexity metrics become noise).
4. Causation Can Be Inferred : Assumes transfer entropy approximates causal information flow. Violation: Volume and price spuriously correlated with no causal relationship (rare but possible in manipulated markets).
5. Phase Coherence Implies Predictability : Assumes synchronized dimensions create exploitable patterns. Violation: Coherence by chance during random period (false positive).
6. Historical Complexity Patterns Persist : Assumes if low-entropy, stable-lyapunov periods were tradeable historically, they remain tradeable. Violation: Fundamental regime change (market structure shifts, e.g., transition from floor trading to HFT).
Performs Best On:
• ES, NQ, RTY (major US index futures - high liquidity, clean volume data)
• Major forex pairs: EUR/USD, GBP/USD, USD/JPY (24hr markets, good for phase analysis)
• Liquid commodities: CL (crude oil), GC (gold), NG (natural gas)
• Large-cap stocks: AAPL, MSFT, GOOGL, TSLA (>$10M daily volume, meaningful structure)
• Major crypto on reputable exchanges: BTC, ETH on Coinbase/Kraken (avoid Binance due to manipulation)
Performs Poorly On:
• Low-volume stocks (<$1M daily volume) - insufficient liquidity for complexity analysis
• Exotic forex pairs - erratic spreads, thin volume
• Illiquid altcoins - wash trading, bot manipulation invalidates volume analysis
• Pre-market/after-hours - gappy, thin, different dynamics
• Binary events (earnings, FDA approvals) - discontinuous jumps violate dynamical systems assumptions
• Highly manipulated instruments - spoofing and layering create false coherence
Known Weaknesses:
• Computational Lag : Complexity calculations require iterating over windows. On slow connections, dashboard may update 1-2 seconds after bar close. Signals may appear delayed.
• Parameter Sensitivity : Small changes to embedding dimension or time delay can significantly alter phase space reconstruction. Requires careful calibration per instrument.
• Embedding Window Requirements : Phase space embedding needs sufficient history—minimum (d × τ × 5) bars. If embedding_dimension=5 and time_delay=3, need 75+ bars. Early bars will be unreliable.
• Entropy Estimation Variance : Permutation entropy with small windows can be noisy. Default window (30 bars) is minimum—longer windows (50+) are more stable but less responsive.
• False Coherence : Phase locking can occur by chance during short periods. Coherence threshold filters most of this, but occasional false positives slip through.
• Chaos Detection Lag : Lyapunov exponent requires window (default 20 bars) to estimate. Market can enter chaos and produce bad signal before λ > 0 is detected. Stability filter helps but doesn't eliminate this.
• Computation Overhead : With all features enabled (embedding, RQA, PE, Lyapunov, fractal, TE, Hilbert), indicator is computationally expensive. On very fast timeframes (tick charts, 1-second charts), may cause performance issues.
⚠️ RISK DISCLOSURE
Trading futures, forex, stocks, options, and cryptocurrencies involves substantial risk of loss and is not suitable for all investors. Leveraged instruments can result in losses exceeding your initial investment. Past performance, whether backtested or live, is not indicative of future results.
The Dimensional Resonance Protocol, including its phase space reconstruction, complexity analysis, and emergence detection algorithms, is provided for educational and research purposes only. It is not financial advice, investment advice, or a recommendation to buy or sell any security or instrument.
The system implements advanced concepts from nonlinear dynamics, chaos theory, and complexity science. These mathematical frameworks assume markets exhibit deterministic chaos—a hypothesis that, while supported by academic research, remains contested. Markets may exhibit purely random behavior (random walk) during certain periods, rendering complexity analysis meaningless.
Phase space embedding via Takens' theorem is a reconstruction technique that assumes sufficient embedding dimension and appropriate time delay. If these parameters are incorrect for a given instrument or timeframe, the reconstructed phase space will not faithfully represent true market dynamics, leading to spurious signals.
Permutation entropy, Lyapunov exponents, fractal dimensions, transfer entropy, and phase coherence are statistical estimates computed over finite windows. All have inherent estimation error. Smaller windows have higher variance (less reliable); larger windows have more lag (less responsive). There is no universally optimal window size.
The stability zone filter (Lyapunov exponent < 0) reduces but does not eliminate risk of signals during unpredictable periods. Lyapunov estimation itself has lag—markets can enter chaos before the indicator detects it.
Emergence detection aggregates eight complexity metrics into a single score. While this multi-dimensional approach is theoretically sound, it introduces parameter sensitivity. Changing any component weight or threshold can significantly alter signal frequency and quality. Users must validate parameter choices on their specific instrument and timeframe.
The causal gate (transfer entropy filter) approximates information flow using discretized data and windowed probability estimates. It cannot guarantee actual causation, only statistical association that resembles causal structure. Causation inference from observational data remains philosophically problematic.
Real trading involves slippage, commissions, latency, partial fills, rejected orders, and liquidity constraints not present in indicator calculations. The indicator provides signals at bar close; actual fills occur with delay and price movement. Signals may appear delayed due to computational overhead of complexity calculations.
Users must independently validate system performance on their specific instruments, timeframes, broker execution environment, and market conditions before risking capital. Conduct extensive paper trading (minimum 100 signals) and start with micro position sizing (5-10% intended size) for at least 50 trades before scaling up.
Never risk more capital than you can afford to lose completely. Use proper position sizing (0.5-2% risk per trade maximum). Implement stop losses on every trade. Maintain adequate margin/capital reserves. Understand that most retail traders lose money. Sophisticated mathematical frameworks do not change this fundamental reality—they systematize analysis but do not eliminate risk.
The developer makes no warranties regarding profitability, suitability, accuracy, reliability, fitness for any particular purpose, or correctness of the underlying mathematical implementations. Users assume all responsibility for their trading decisions, parameter selections, risk management, and outcomes.
By using this indicator, you acknowledge that you have read, understood, and accepted these risk disclosures and limitations, and you accept full responsibility for all trading activity and potential losses.
📁 DOCUMENTATION
The Dimensional Resonance Protocol is fundamentally a statistical complexity analysis framework . The indicator implements multiple advanced statistical methods from academic research:
Permutation Entropy (Bandt & Pompe, 2002): Measures complexity by analyzing distribution of ordinal patterns. Pure statistical concept from information theory.
Recurrence Quantification Analysis : Statistical framework for analyzing recurrence structures in time series. Computes recurrence rate, determinism, and diagonal line statistics.
Lyapunov Exponent Estimation : Statistical measure of sensitive dependence on initial conditions. Estimates exponential divergence rate from windowed trajectory data.
Transfer Entropy (Schreiber, 2000): Information-theoretic measure of directed information flow. Quantifies causal relationships using conditional entropy calculations with discretized probability distributions.
Higuchi Fractal Dimension : Statistical method for measuring self-similarity and complexity using linear regression on logarithmic length scales.
Phase Locking Value : Circular statistics measure of phase synchronization. Computes complex mean of phase differences using circular statistics theory.
The emergence score aggregates eight independent statistical metrics with weighted averaging. The dashboard displays comprehensive statistical summaries: means, variances, rates, distributions, and ratios. Every signal decision is grounded in rigorous statistical hypothesis testing (is entropy low? is lyapunov negative? is coherence above threshold?).
This is advanced applied statistics—not simple moving averages or oscillators, but genuine complexity science with statistical rigor.
Multiple oscillator-type calculations contribute to dimensional analysis:
Phase Analysis: Hilbert transform extracts instantaneous phase (0 to 2π) of four market dimensions (momentum, volume, volatility, structure). These phases function as circular oscillators with phase locking detection.
Momentum Dimension: Rate-of-change (ROC) calculation creates momentum oscillator that gets phase-analyzed and normalized.
Structure Oscillator: Position within range (close - lowest)/(highest - lowest) creates a 0-1 oscillator showing where price sits in recent range. This gets embedded and phase-analyzed.
Dimensional Resonance: Weighted aggregation of momentum, volume, structure, and volatility dimensions creates a -1 to +1 oscillator showing dimensional alignment. Similar to traditional oscillators but multi-dimensional.
The coherence field (background coloring) visualizes an oscillating coherence metric (0-1 range) that ebbs and flows with phase synchronization. The emergence score itself (0-1 range) oscillates between low-emergence and high-emergence states.
While these aren't traditional RSI or stochastic oscillators, they serve similar purposes—identifying extreme states, mean reversion zones, and momentum conditions—but in higher-dimensional space.
Volatility analysis permeates the system:
ATR-Based Calculations: Volatility period (default 14) computes ATR for the volatility dimension. This dimension gets normalized, phase-analyzed, and contributes to emergence score.
Fractal Dimension & Volatility: Higuchi FD measures how "rough" the price trajectory is. Higher FD (>1.6) correlates with higher volatility/choppiness. FD < 1.4 indicates smooth trends (lower effective volatility).
Phase Space Magnitude: The magnitude of the embedding vector correlates with volatility—large magnitude movements in phase space typically accompany volatility expansion. This is the "energy" of the market trajectory.
Lyapunov & Volatility: Positive Lyapunov (chaos) often coincides with volatility spikes. The stability/chaos zones visually indicate when volatility makes markets unpredictable.
Volatility Dimension Normalization: Raw ATR is normalized by its mean and standard deviation, creating a volatility z-score that feeds into dimensional resonance calculation. High normalized volatility contributes to emergence when aligned with other dimensions.
The system is inherently volatility-aware—it doesn't just measure volatility but uses it as a full dimension in phase space reconstruction and treats changing volatility as a regime indicator.
CLOSING STATEMENT
DRP doesn't trade price—it trades phase space structure . It doesn't chase patterns—it detects emergence . It doesn't guess at trends—it measures coherence .
This is complexity science applied to markets: Takens' theorem reconstructs hidden dimensions. Permutation entropy measures order. Lyapunov exponents detect chaos. Transfer entropy reveals causation. Hilbert phases find synchronization. Fractal dimensions quantify self-similarity.
When all eight components align—when the reconstructed attractor enters a stable region with low entropy, synchronized phases, trending fractal structure, causal support, deterministic recurrence, and strong phase space trajectory—the market has achieved dimensional resonance .
These are the highest-probability moments. Not because an indicator said so. Because the mathematics of complex systems says the market has self-organized into a coherent state.
Most indicators see shadows on the wall. DRP reconstructs the cave.
"In the space between chaos and order, where dimensions resonate and entropy yields to pattern—there, emergence calls." DRP
Taking you to school. — Dskyz, Trade with insight. Trade with anticipation.
iQFFT
Library "iQFFT"
TODO: add library description here
fft(x, y, dir)
Parameters:
x (array)
y (array)
dir (string)
fftPower(x, y)
Parameters:
x (array)
y (array)
fftFreq(N)
Parameters:
N (int)
Simple Grid Trading v1.0 [PUCHON]
Simple Grid Trading v1.0
Overview
This is a Long-Only Grid Trading Strategy developed in Pine Script v6 for TradingView. It is designed to profit from market volatility by placing a series of Buy Limit orders at predefined price levels. As the price drops, the strategy accumulates positions. As the price rises, it sells these positions at a profit.
Features
Grid Types : Supports both Arithmetic (equal price spacing) and Geometric (equal percentage spacing) grids.
Flexible Order Management : Uses strategy.order for precise control and prevents duplicate orders at the same level.
Performance Dashboard : A real-time table displaying key metrics like Capital, Cashflow, and Drawdown.
Advanced Metrics : Includes Max Drawdown (MaxDD) , Avg Monthly Return , and CAGR calculations.
Customizable : Fully adjustable price range, grid lines, and lot size.
Dashboard Metrics
The dashboard (default: Bottom Right) provides a quick snapshot of the strategy's performance:
Initial Capital : The starting capital defined in the strategy settings.
Lot Size : The fixed quantity of assets purchased per grid level.
Avg. Profit per Grid : The average realized profit for each closed trade.
Cashflow : The total realized net profit (closed trades only).
MaxDD : Maximum Drawdown . The largest percentage drop in equity (realized + unrealized) from a peak.
Avg Monthly Return : The average percentage return generated per month.
CAGR : Compound Annual Growth Rate . The mean annual growth rate of the investment over the specified time period.
Strategy Settings (Inputs)
Grid Settings
Upper Price : The highest price level for the grid.
Lower Price : The lowest price level for the grid.
Number of Grid Lines : The total number of levels (lines) in the grid.
Grid Type :
Arithmetic: Distance between lines is fixed in price terms (e.g., $10, $20, $30).
Geometric: Distance between lines is fixed in percentage terms (e.g., 1%, 2%, 3%).
Lot Size : The fixed amount of the asset to buy at each level.
Dashboard Settings
Show Dashboard : Toggle to hide/show the performance table.
Position : Choose where the dashboard appears on the chart (e.g., Bottom Right, Top Left).
How It Works
Initialization : On the first bar, the script calculates the price levels based on your Upper/Lower price and Grid Type (see the level-spacing sketch after this list).
Entry Logic :
The strategy places Buy Limit orders at every grid level below the current price.
It checks if a position already exists at a specific level to avoid "stacking" multiple orders on the same line.
Exit Logic :
For every Buy order, a corresponding Sell Limit (Take Profit) order is placed at the next higher grid level.
MaxDD Calculation :
The script continuously tracks the highest equity peak.
It calculates the drawdown on every bar (including intra-bar movements) to ensure accuracy.
Displayed as a percentage (e.g., 5.25%).
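A level-spacing sketch for step 1 above. The input names mirror the settings described in this write-up, but the code is an illustration rather than the strategy's source; it only computes and draws the levels.

//@version=5
indicator("Grid level spacing sketch", overlay=true)

upper      = input.float(110.0, "Upper Price")
lower      = input.float(90.0,  "Lower Price")
grid_lines = input.int(11, "Number of Grid Lines", minval=2)
geometric  = input.bool(false, "Geometric spacing")

// price of grid line i (i = 0 is the lower bound, i = grid_lines - 1 the upper bound)
grid_level(i) =>
    geometric ? lower * math.pow(upper / lower, i / (grid_lines - 1)) : lower + (upper - lower) * i / (grid_lines - 1)

if barstate.islast
    for i = 0 to grid_lines - 1
        line.new(bar_index - 50, grid_level(i), bar_index, grid_level(i), extend=extend.right, color=color.gray)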
Disclaimer
This script is for educational and backtesting purposes only. Grid trading involves significant risk, especially in strong trending markets where the price may move outside your grid range. Always use proper risk management.
SPY/QQQ Customizable Price Converter
This is a minimalist utility tool designed for index traders (SPX, NDX, RUT). It allows you to monitor the price of a reference asset (like SPY or QQQ) directly on your main chart without cluttering your screen.
Key Features:
1.🖱️ Crosshair Sync for Historical Data (Highlight): Unlike simple info tables that only show the latest price, this script allows for historical inspection.
· How it works: Simply move your mouse crosshair over ANY historical candle on your chart.
· The script will instantly display the closing price of the reference asset (e.g., SPY) for that specific time in the Status Line (top-left) or the Data Window. Perfect for backtesting and reviewing price action (a minimal sketch of this mechanism follows the feature list).
2.🔄 Fully Customizable Ticker: Default is set to SPY, but you can change it to anything in the settings.
e.g.
· Trading NDX? Change it to QQQ.
· Trading RUT? Change it to IWM.
3.📊 Clean Real-Time Dashboard:
· A floating table displays the current real-time price of your reference asset.
· Color-coded text (Green/Red) indicates price movement.
· Fully customizable size, position, and colors to fit your layout.
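A minimal sketch of the crosshair-sync mechanism from feature 1 (an assumption about the approach, not the published code): request the reference series, then plot it only to the status line and data window so the chart pane stays clean.

//@version=5
indicator("Reference price sketch", overlay=true)

ref = input.symbol("AMEX:SPY", "Reference symbol")

// reference asset's close for every bar of the current chart
ref_close = request.security(ref, timeframe.period, close)

// hidden from the pane, visible in the status line and data window,
// so hovering the crosshair over any bar shows that bar's reference price
plot(ref_close, "Ref close", color.new(color.blue, 0), display=display.status_line + display.data_window)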
Debt-Cycle vs Bitcoin-Cycle
Debt-Cycle vs Bitcoin-Cycle Indicator
The Debt-Cycle vs Bitcoin-Cycle indicator is a macro-economic analysis tool that compares traditional financial market cycles (debt/credit cycles) against Bitcoin market cycles. It uses Z-score normalization to track the relative positioning of global financial conditions versus cryptocurrency market sentiment, helping identify potential turning points and divergences between traditional finance and digital assets.
Key Features
Dual-Cycle Analysis: Simultaneously tracks traditional financial cycles and Bitcoin-specific cycles
Z-Score Normalization: Standardizes diverse data sources for meaningful comparison
Multi-Asset Coverage: Analyzes currencies, commodities, bonds, monetary aggregates, and on-chain metrics
Divergence Detection: Identifies when Bitcoin cycles move independently from traditional finance
21-Day Timeframe: Optimized for Long-term cycle analysis
What It Measures
Finance-Cycle (White Line)
Tracks traditional financial market health through:
Currencies: USD strength (DXY), global currency weights (USDWCU, EURWCU)
Commodities: Oil, gold, natural gas, agricultural products, and Bitcoin price
Corporate Bonds: Investment-grade spreads, high-yield spreads, credit conditions
Monetary Aggregates: M2 money supply, foreign exchange reserves (weighted by currency)
Treasury Bonds: Yield curve (2Y/10Y, 3M/10Y), term premiums, long-term rates
Bitcoin-Cycle (Orange Line)
Tracks Bitcoin market positioning through:
On-Chain Metrics:
MVRV Ratio (Market Value to Realized Value)
NUPL (Net Unrealized Profit/Loss)
Profit/Loss Address Distribution
Technical Indicators:
Bitcoin price Z-score
Moving average deviation
Relative Strength:
ETH/BTC ratio (altcoin strength indicator)
Visual Elements
White Line: Finance-Cycle indicator (positive = expansionary conditions, negative = contractionary)
Orange Line: Bitcoin-Cycle indicator (positive = bullish positioning, negative = bearish)
Zero Line: Neutral reference point
Interpretation
Cycle Alignment
Both positive: Risk-on environment, favorable for crypto
Both negative: Risk-off environment, caution warranted
Divergence: Potential opportunities or warning signals
Divergence Signals
Finance positive, Bitcoin negative: Bitcoin may be undervalued relative to macro conditions
Finance negative, Bitcoin positive: Bitcoin may be overextended or decoupling from traditional finance
Important Limitations
This indicator uses some technical and macro data but still has significant gaps:
⚠️ Limited monetary data - missing:
Funding rates (repo, overnight markets)
Comprehensive bond spread analysis
Collateral velocity and quality metrics
Central bank balance sheet details
⚠️ Basic economic coverage - missing:
GDP growth rates
Inflation expectations
Employment data
Manufacturing indices
Consumer confidence
⚠️ Simplified on-chain analysis - missing:
Exchange flow data
Whale wallet movements
Mining difficulty adjustments
Hash rate trends
Network fee dynamics
⚠️ No sentiment data - missing:
Fear & Greed Index
Options positioning
Futures open interest
Social media sentiment
The indicator provides a high-level cycle comparison but should be combined with comprehensive fundamental analysis, detailed on-chain research, and proper risk management.
Settings
Offset: Adjust the horizontal positioning of the indicators (default: 0)
Timeframe: Fixed at 21 days for optimal cycle detection
Use Cases
Macro-crypto correlation analysis: Understand when Bitcoin moves with or against traditional markets
Cycle timing: Identify potential tops and bottoms in both cycles
Risk assessment: Gauge overall market conditions across asset classes
Divergence trading: Spot opportunities when cycles diverge significantly
Portfolio allocation: Balance traditional and crypto assets based on cycle positioning
Technical Notes
Uses Z-score normalization with varying lookback periods (40-60 bars)
Applies HMA (Hull Moving Average) smoothing to reduce noise
Asymmetric multipliers for upside/downside movements in certain metrics
Requires access to FRED economic data, Glassnode, CoinMetrics, and IntoTheBlock feeds
21-day timeframe optimized for cycle analysis
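For readers who want to reproduce the normalization step, here is a minimal Pine v5 sketch of the Z-score plus HMA smoothing approach described in the notes above. It is not the published source: the symbols (INDEX:BTCUSD, TVC:DXY), the 50-bar lookback and the 9-bar HMA are illustrative assumptions, and only one finance-cycle component is shown.
//@version=5
indicator("Z-Score Cycle Sketch", overlay=false)
zscore(src, len) =>
    (src - ta.sma(src, len)) / ta.stdev(src, len)
// 21-day timeframe as fixed in the original; symbols and lengths below are placeholders
btc = request.security("INDEX:BTCUSD", "21D", close)
dxy = request.security("TVC:DXY", "21D", close)
zBtc = ta.hma(zscore(btc, 50), 9)     // Bitcoin-cycle component
zFin = ta.hma(-zscore(dxy, 50), 9)    // one finance-cycle component (inverted USD strength)
plot(zBtc, "Bitcoin-Cycle", color=color.orange)
plot(zFin, "Finance-Cycle", color=color.white)
hline(0, "Zero Line")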
Strategy Applications
This indicator is particularly useful for:
Cross-asset allocation - Decide between traditional finance and crypto exposure
Cycle positioning - Identify where we are in credit/debt cycles vs. Bitcoin cycles
Regime changes - Detect shifts in market leadership and correlation patterns
Risk management - Reduce exposure when both cycles turn negative
Disclaimer: This indicator is a cycle analysis tool and should not be used as the sole basis for investment decisions. It has limited coverage of monetary conditions, economic fundamentals, and on-chain metrics. The indicator provides directional insight but cannot predict exact timing or magnitude of market moves. Always conduct thorough research, consider multiple data sources, and maintain proper risk management in all investment decisions.
Market Sentiment [NeuraAlgo]
Market Sentiment
This indicator provides a real-time view of market momentum and sentiment by analyzing bullish and bearish impulses using price and volatility-based calculations. It visualizes trends on the chart and offers a dashboard with key statistics.
1. Status Calculation
The Status measures bullish momentum by identifying strong upward impulses.
Equation:
Status Source = Average of lows where (Low - High[2]) > ATR
For each bar, it checks if the current low minus the high from two bars ago exceeds the Average True Range (ATR) .
All lows that satisfy this condition are collected.
The average of these lows forms the Status Source , representing the level of strong buying pressure.
This helps traders visualize where significant bullish activity is concentrated and gauge upward momentum.
2. Status Source Calculation
Similarly, bearish impulses are detected by checking if highs fall below lows from two bars ago beyond ATR thresholds. The corresponding levels form the reference for selling pressure.
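A minimal Pine v5 sketch of the impulse bookkeeping described in sections 1 and 2 follows, assuming the stated condition (the low minus the high from two bars ago exceeding ATR, and the mirror condition for bearish impulses). The rolling sample size and variable names are my own illustrative choices, not the NeuraAlgo source.
//@version=5
indicator("Impulse Reference Sketch", overlay=true)
atrLen = input.int(14, "ATR Length")
atrVal = ta.atr(atrLen)
// bullish impulses: low jumps above the high from two bars ago by more than ATR
var float[] bullLows = array.new_float()
if low - high[2] > atrVal
    array.push(bullLows, low)
    if array.size(bullLows) > 50        // keep a rolling sample of recent impulses
        array.shift(bullLows)
// bearish impulses: high drops below the low from two bars ago by more than ATR
var float[] bearHighs = array.new_float()
if low[2] - high > atrVal
    array.push(bearHighs, high)
    if array.size(bearHighs) > 50
        array.shift(bearHighs)
statusSource = array.size(bullLows) > 0 ? array.avg(bullLows) : na
bearSource = array.size(bearHighs) > 0 ? array.avg(bearHighs) : na
plot(statusSource, "Bullish reference (Status Source)", color=color.green)
plot(bearSource, "Bearish reference", color=color.red)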
3. Trend Strength and States
Strength quantifies how far price sits from the bullish or bearish reference levels, expressed as a percentage.
Trend States
Stability Phase (Gray): Market is quiet, minimal momentum.
Positive Flow (Green): Bullish pressure dominates; buyers are in control.
Negative Flow (Red): Bearish pressure dominates; sellers lead.
State Transition: Market is shifting; momentum is building.
4. Visuals
Bar colors indicate trend state: green for bullish, red for bearish, gray for neutral.
Filled zones highlight bullish and bearish reference levels for intuitive trend analysis.
5. Dashboard
An optional dashboard displays:
Sentiment: Visual gradient representing bullish or bearish dominance.
Status: Current trend state in concise, human-readable terms.
6. Purpose:
This indicator is designed to identify the current market status and the behavior of the asset by analyzing bullish and bearish impulses. It helps traders understand whether the market shows signs of stability, growth, or decline based on the asset’s price action and volatility.
Understand the asset behavior
Healthy asset behavior
Weak asset behavior
Market Sentiment combines price action, ATR-based volatility, and impulse tracking to provide a clear and actionable view of market conditions. The BullLine equation ensures that only meaningful bullish moves are highlighted, giving traders a reliable reference for momentum and potential entry points.
2s10s Bull/Bear Steepener/Flattener (Intraday bars)A simple indicator that tracks the 2s10s segment of the US yield curve, i.e. the spread between the US 2-year and 10-year Treasury yields.
Z-Score IndicatorThis script calculates the Z-Score to measure how many standard deviations the current price is from its mean (SMA). It is a classic tool for identifying statistical extremes and mean reversion opportunities.
Formula: Z = (Close - Mean) / Standard Deviation
Visual Guide
Blue Line: The Z-Score value.
Red Dotted Lines (+/- 2): Statistical extremes.
> 2: Potentially Overbought.
< -2: Potentially Oversold.
Grey Dotted Line (0): The mean (fair value).
Settings
Lookback Period: Default is 30. Adjust to change sensitivity.
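The whole calculation fits in a few lines of Pine v5; the sketch below follows the formula and defaults stated above (written from the description, not copied from the published script).
//@version=5
indicator("Z-Score Sketch", overlay=false)
len = input.int(30, "Lookback Period")
z = (close - ta.sma(close, len)) / ta.stdev(close, len)
plot(z, "Z-Score", color=color.blue)
hline(2, "Overbought", color=color.red, linestyle=hline.style_dotted)
hline(-2, "Oversold", color=color.red, linestyle=hline.style_dotted)
hline(0, "Mean", color=color.gray, linestyle=hline.style_dotted)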
Highlight TimeHighlight Time shades the chart background during user‑defined hours. Choose start/end times and a time zone to visually mark key trading windows like the spread hour.
Kernel Channel [BackQuant]Kernel Channel
A non-parametric, kernel-weighted trend channel that adapts to local structure, smooths noise without lagging like moving averages, and highlights volatility compressions, expansions, and directional bias through a flexible choice of kernels, band types, and squeeze logic.
What this is
This indicator builds a full trend channel using kernel regression rather than classical averaging. Instead of a simple moving average or exponential weighting, the midline is computed as a kernel-weighted expectation of past values. This allows it to adapt to local shape, give more weight to nearby bars, and reduce distortion from outliers.
You can think of it as a sliding local smoother where you define both the “window” of influence (Window Length) and the “locality strength” (Bandwidth). The result is a flexible midline with optional upper and lower bands derived from kernel-weighted ATR or kernel-weighted standard deviation, letting you visualize volatility in a structurally consistent way.
Three plotting modes help demonstrate this difference:
When the midline is shown alone, you get a smooth, adaptive baseline that behaves almost like a regression moving average, as shown in this view:
When full channels are enabled, you see how standard deviation reacts to local structure with dynamically widening and tightening bands, a mode illustrated here:
When ATR mode is chosen instead of StdDev, band width reflects breadth of movement rather than variance, creating a volatility-aware envelope like the example here:
Why kernels
Classical moving averages allocate fixed weights. Kernels let the user define weighting shape:
Epanechnikov — emphasizes bars near the current bar, fades fast, stable and smooth.
Triangular — linear decay, simple and responsive.
Laplacian — exponential decay from the current point, sharper reactivity.
Cosine — gentle periodic decay, balanced smoothness for trend filters.
Using these in combination with a bandwidth parameter gives fine control over smoothness vs responsiveness. Smaller bandwidths give sharper local sensitivity, larger bandwidths give smoother curvature.
How it works (core logic)
The indicator computes three building blocks:
1) Kernel-weighted midline
For every bar, a sliding window looks back Window Length bars. Each bar in this window receives a kernel weight depending on:
its index distance from the present
the chosen kernel shape
the bandwidth parameter (locality)
Weights form the denominator, weighted values form the numerator, and the resulting ratio is the kernel regression mean. This midline is the central trend.
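As a rough illustration of that midline (not the BackQuant source), here is a stripped-down Pine v5 sketch using only the Epanechnikov kernel; the loop weighting is the part that distinguishes it from a plain moving average, and the default window and bandwidth are assumptions.
//@version=5
indicator("Kernel Midline Sketch", overlay=true)
winLen = input.int(100, "Window Length")
bandwidth = input.float(1.0, "Bandwidth")
// Epanechnikov kernel: weight fades quadratically with normalized distance, zero beyond it
kernel(u) =>
    math.abs(u) <= 1.0 ? 0.75 * (1.0 - u * u) : 0.0
float num = 0.0
float den = 0.0
for i = 0 to winLen - 1
    u = i / (winLen * bandwidth)        // index distance scaled by window and bandwidth
    w = kernel(u)
    num += w * close[i]
    den += w
midline = den > 0 ? num / den : na
plot(midline, "Kernel midline", color=color.teal, linewidth=2)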
2) Kernel-based width
You choose one of two band types:
Kernel ATR — ATR values are kernel-averaged, producing a smooth, volatility-based width that is not dependent on variance. Ideal for directional trend channels and regime separation.
Kernel StdDev — local variance around the midline is computed through kernel weighting. This produces a true statistical envelope that narrows in quiet periods and widens in noisy areas.
Width is scaled using Band Multiplier , controlling how far the envelope extends.
3) Upper and lower channels
Provided midline and width exist, the channel edges are:
Upper = midline + bandMult × width
Lower = midline − bandMult × width
These create smooth structures around price that adapt continuously.
Plotting modes
The indicator supports multiple visual styles depending on what you want to emphasize.
When only the midline is displayed, you get a pure kernel trend: a smooth regression-like curve that reacts to local structure while filtering noise, demonstrated here: This provides a clean read on direction and slope.
With full channels enabled, the behavior of the bands becomes visible. Standard deviation mode creates elastic boundaries that tighten during compressions and widen during turbulence, which you can see in the band-focused demonstration: This helps identify expansion events, volatility clusters, and breakouts.
ATR mode shifts interpretation from statistical variance to raw movement amplitude. This makes channels less sensitive to outliers and more consistent across trend phases, as shown in this ATR variation example: This mode is particularly useful for breakout systems and bar-range regimes.
Regime detection and bar coloring
The slope of the midline defines directional bias:
Up-slope → green
Down-slope → red
Flat → gray
A secondary regime filter compares close to the channel:
Trend Up Strong — close above upper band and midline rising.
Trend Down Strong — close below lower band and midline falling.
Trend Up Weak — close between midline and upper band with rising slope.
Trend Down Weak — close between lower band and midline with falling slope.
Compression mode — squeeze conditions.
Bar coloring is optional and can be toggled for cleaner charts.
Squeeze logic
The indicator includes non-standard squeeze detection based on relative width , defined as:
width / |midline|
This gives a dimensionless measure of how “tight” or “loose” the channel is, normalized for trend level.
A rolling window evaluates the percentile rank of current width relative to past behavior. If the width is in the lowest X% of its last N observations, the script flags a squeeze environment. This highlights compression regions that may precede breakouts or regime shifts.
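A self-contained Pine v5 sketch of that squeeze test follows. The stand-in channel here is an SMA midline with an ATR width purely so the example runs on its own; the published script feeds its kernel midline and kernel width into the same relative-width and percentile-rank logic. The 15% cutoff and 100-bar lookback are assumptions.
//@version=5
indicator("Relative-Width Squeeze Sketch", overlay=true)
midline = ta.sma(close, 100)             // stand-in for the kernel midline
width = ta.atr(14) * 2.0                 // stand-in for the kernel width
relWidth = width / math.abs(midline)     // dimensionless channel tightness
pctRank = ta.percentrank(relWidth, 100)  // where the current width sits vs its own history
bgcolor(pctRank <= 15 ? color.new(color.yellow, 85) : na)  // lowest 15% flagged as a squeeze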
Deviation highlighting
When using Kernel StdDev mode, you may enable deviation flags that highlight bars where price moves outside the channel:
Above upper band → bullish momentum overextension
Below lower band → bearish momentum overextension
This is turned off in ATR mode because ATR widths do not represent distributional variance.
Alerts included
Kernel Channel Long — midline turns up.
Kernel Channel Short — midline turns down.
Price Crossed Midline — crossover or crossunder of the midline.
Price Above Upper — early momentum expansion.
Price Below Lower — downward volatility expansion.
These help automate regime changes and breakout detection.
How to use it
Trend identification
The midline acts as a bias filter. Rising midline means trend strength upward, falling midline means downward behavior. The channel width contextualizes confidence.
Breakout anticipation
Kernel StdDev compressions highlight areas where price is coiling. Breakouts often follow narrow relative width. ATR mode provides structural expansion cues that are smooth and robust.
Mean reversion
StdDev mode is suitable for fade setups. Moves to outer bands during low volatility often revert to the midline.
Continuation logic
If price breaks above the upper band while midline is rising, the indicator flags strong directional expansion. Same logic for breakdowns on the lower band.
Volatility characterization
Kernel ATR maps raw bar movements and is excellent for identifying regime shifts in markets where variance is unstable.
Tuning guidance
For smoother long-term trend tracking
Larger window (150–300).
Moderate bandwidth (1.0–2.0).
Epanechnikov or Cosine kernel.
ATR mode for stable envelopes.
For swing trading / short-term structure
Window length around 50–100.
Bandwidth 0.6–1.2.
Triangular for speed, Laplacian for sharper reactions.
StdDev bands for precise volatility compression.
For breakout systems
Smaller bandwidth for sharp local detection.
ATR mode for stable envelopes.
Enable squeeze highlighting for identifying setups early.
For mean-reversion systems
Use StdDev bands.
Moderate window length.
Highlight deviations to locate overextended bars.
Settings overview
Kernel Settings
Source
Window Length
Bandwidth
Kernel Type (Epanechnikov, Triangular, Laplacian, Cosine)
Channel Width
Band Type (Kernel ATR or Kernel StdDev)
Band Multiplier
Visuals
Show Bands
Color Bars By Regime
Highlight Squeeze Periods
Highlight Deviation
Lookback and Percentile settings
Colors for uptrend, downtrend, squeeze, flat
Trading applications
Trend filtering — trade only in direction of the midline slope.
Breakout confirmation — expansion outside the bands while slope agrees.
Squeeze timing — compression periods often precede the next directional leg.
Volatility-aware stops — ATR mode makes channel edges suitable for adaptive stop placement.
Structural swing mapping — StdDev bands help locate midline pullbacks vs distributional extremes.
Bias rotation — bar coloring highlights when regime shifts occur.
Notes
The Kernel Channel is not a signal generator by itself, but a structural map. It helps classify trend direction, volatility environment, distribution shape, and compression cycles. Combine it with your entry and exit framework, risk parameters, and higher-timeframe confirmation.
It is designed to behave consistently across markets, to avoid the bluntness of classical averages, and to reveal subtle curvature in price that traditional channels miss. Adjust kernel type, bandwidth, and band source to match the noise profile of your instrument, then use squeeze logic and deviation highlighting to guide timing.
CME Gap Tracker + Live StatisticsThis script automatically finds the gaps inherent in the time data of any given chart, and displays them in color-coded buckets based on how long it takes for the gap close to get filled. Add it on any CME Futures chart on the daily, and it will find all the weekend gaps. Set your period to an hour, and it will find the intraday gaps. It also displays a statistical calculation for each bucket.
OTT Volatility [RunRox]📊 OTT Volatility is an indicator developed by the RunRox team to pinpoint the most optimal time to trade across different markets.
OTT stands for Optimal Trade Time Volatility and is designed primarily for markets without a fixed trading session, such as cryptocurrencies that trade 24/7. At the same time, it works equally well on any other market.
🔶 The concept is straightforward. The indicator takes a specified number of historical periods (Samples) and statistically evaluates which hours of the day or which days show the highest volatility for the selected asset.
As a result, it highlights time windows with elevated volatility where traders can focus on searching for trade setups and building positions.
🔶 As the core volatility metric, the indicator uses ATR (Average True Range) to measure intraday volatility. Then it calculates the average ATR value over the last N Samples, creating a statistically stable estimate of typical volatility for the selected asset.
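A rough Pine v5 sketch of that idea (not the RunRox source): bucket true range by hour of day and keep a running average per bucket, so the profile shows which hours tend to be the most volatile. The exponential update is a simple stand-in for "average of the last N samples", and the sketch assumes an intraday chart.
//@version=5
indicator("Hourly Volatility Profile Sketch", overlay=false)
samples = input.int(20, "Samples (days of history)")
var float[] hourVol = array.new_float(24, na)
trVal = ta.tr(true)
h = hour(time, syminfo.timezone)
alpha = 2.0 / (samples + 1)             // exponential stand-in for an N-sample average
prev = array.get(hourVol, h)
array.set(hourVol, h, na(prev) ? trVal : prev + alpha * (trVal - prev))
plot(array.get(hourVol, h), "Typical volatility for this hour", color=color.aqua)
plot(trVal, "Current bar range", color=color.orange)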
🔶 Statistically, during these highlighted periods the market shows higher-than-average volatility.
This means that in these time windows price is more likely to be subject to stronger moves and potential manipulation, making them attractive for active trade execution and position management.
⚠️ However, historical behavior does not guarantee future results.
These periods should be treated only as zones where volatility has a higher probability of being above normal, not as a promise of movement.
As shown in the screenshot above, the indicator also projects potential future volatility based on historical data. This helps you better plan your trading hours and align your activity with periods where volatility is statistically expected to be higher or lower.
🔶 Current Volatility – as shown in the screenshot above, you can also monitor the real-time volatility of the market without any statistical averaging.
On top of that, you can overlay the current volatility on top of the statistical volatility levels, which makes it easy to see whether the market is now trading in a high- or low-volatility regime relative to its usual behavior.
4 display modes – you can choose any visualization style that fits your trading workflow:
Absolute – displays the raw volatility values.
Relative – shows volatility relative to its typical levels.
Average Centered – centers volatility around its average value.
Trim Low Value – filters out low-volatility noise and highlights only more significant moves.
This indicator helps you define the most effective trading hours on any market by relying on historical volatility statistics.
Use it to quickly see when your market tends to be more active and to structure your trading sessions around those periods.
✅ We hope this tool becomes a useful part of your trading toolkit and helps you improve the quality of your decisions and timing.
Trend Continuation [OmegaTools]Trend Continuation is a trend-following and trend-continuation tool designed to highlight high-probability pullbacks within an existing directional bias. It helps discretionary and systematic traders visually isolate “continuation zones” where a retracement is more likely to resolve in favor of the prevailing trend rather than trigger a full reversal.
1. Concept and Objective
The indicator combines two key components:
1. A trend bias engine (based either on a Rolling VWAP regime or on swing market structure).
2. A pullback pressure model, which quantifies how deep and “aggressive” the recent retracement has been relative to the trend.
The goal is to identify moments where the market pulls back against the trend, builds enough “reversal pressure,” and then shows signs that the trend is likely to **continue** rather than flip. When specific conditions are met, the indicator highlights bars and plots reference levels that can be used as potential continuation zones, filters, or confluence areas in a broader trading plan.
2. Trend Bias Modes
The primary trend direction is defined through the `Trend Mode` input:
* **RVWAP Mode (default)**
The script computes two rolling volume-weighted average prices over different lengths:
* A **shorter-term rolling VWAP**
* A **longer-term rolling VWAP**
When the shorter RVWAP is above the longer one, the bias is set to **bullish (+1)**. When it is below, the bias is **bearish (-1)**.
This creates a smooth, volume-weighted trend definition that tends to adapt to shifting regimes and filters out minor noise.
* **Market Structure Mode**
In this mode, trend bias is derived from **pivot highs and lows**:
* When price breaks above a recent pivot high, the bias flips to **bullish (+1)**.
* When price breaks below a recent pivot low, the bias flips to **bearish (-1)**.
This approach is more structurally oriented and reacts to significant swing breaks rather than just moving-average style relationships.
If no clear condition is met, the internal bias can temporarily be neutral, though the main design assumes working with clearly bullish or bearish environments.
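Of the two bias modes, the RVWAP one reduces to comparing two volume-weighted means; the hedged Pine v5 sketch below illustrates just that comparison (a rolling VWAP over a fixed window is a volume-weighted moving average, and the 50/200 lengths here are illustrative, not the OmegaTools defaults).
//@version=5
indicator("RVWAP Bias Sketch", overlay=true)
fastLen = input.int(50, "Short RVWAP Length")
slowLen = input.int(200, "Long RVWAP Length")
fastRV = ta.vwma(close, fastLen)
slowRV = ta.vwma(close, slowLen)
bias = fastRV > slowRV ? 1 : fastRV < slowRV ? -1 : 0   // +1 bullish, -1 bearish
plot(fastRV, "Short rolling VWAP", color=color.aqua)
plot(slowRV, "Long rolling VWAP", color=color.gray)
barcolor(bias == 1 ? color.green : bias == -1 ? color.red : color.gray)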
3. Pullback and Reversal Pressure Logic
Once the trend bias is defined, the indicator measures **pullback intensity** against that trend:
* A **lookback window (“Pullback Length”)** scans recent highs and lows:
* In an uptrend, it tracks the **highest high** over the window and measures how far the current low pulls back from that high.
* In a downtrend, it tracks the **lowest low** and measures how far the current high bounces up from that low.
* This distance is converted into a **“reversal pressure” value**:
* In a bullish bias, deeper pullbacks (lower lows relative to the recent high) indicate stronger counter-trend pressure.
* In a bearish bias, stronger rallies (higher highs relative to the recent low) indicate stronger counter-trend pressure.
The raw reversal pressure is then smoothed with a long-term moving average to separate normal retracements from **statistically significant extremes**.
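A minimal Pine v5 sketch of the bullish-bias case of this pressure measure is shown below, written from the description above rather than from the actual source; the percentage scaling, window lengths and names are assumptions.
//@version=5
indicator("Reversal Pressure Sketch", overlay=false)
pullLen = input.int(20, "Pullback Length")
smoothLen = input.int(200, "Smoothing Length")
swingHigh = ta.highest(high, pullLen)
pressure = (swingHigh - low) / swingHigh * 100   // depth of the pullback from the window high, in %
avgPressure = ta.sma(pressure, smoothLen)        // long-term average separates routine dips from extremes
plot(pressure, "Reversal pressure", style=plot.style_columns, color=color.new(color.blue, 40))
plot(avgPressure, "Long-term average", color=color.gray)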
4. Thresholds and Histogram Coloring
To avoid reacting to every minor pullback, the indicator builds a **dynamic threshold** using a combination of:
* Long-term averages of reversal pressure.
* Standard deviation of reversal pressure.
* High-percentile values of reversal behavior over different sample sizes.
From this, a **threshold line** is derived, and the script then compares the current reversal pressure to this adaptive level:
* The **Reversal Histogram** (column plot) represents the excess reversal pressure above its own long-term average.
* When:
* There is a valid bullish or bearish bias, and
* The histogram is above the dynamic threshold,
the bars of the histogram are **colored**:
* Blue (or a similar “positive” color) in bullish bias.
* Red/pink (or a similar “negative” color) in bearish bias.
* When reversal pressure is below threshold or bias is not relevant, the histogram remains **neutral gray**.
These colored histogram segments represent **“high-tension” pullback states**, where counter-trend pressure has reached an extreme that, historically, often resolves with the original trend continuing rather than fully reversing.
5. Continuation Level and Bar Coloring on Price Chart
To connect the oscillator logic back to the chart:
* A **continuation reference level** is computed on the price series:
* In an uptrend, this is derived by subtracting the threshold from recent highs.
* In a downtrend, it is derived by adding the threshold to recent lows.
* This level is plotted as a **line on the price chart** (only when the trend bias is stable), acting as a visual guide for:
* Potential continuation zones,
* Possible stop-placement or invalidation areas,
* Or filters for entries/exits.
The bars are then **colored** when price crosses or interacts with these levels in the direction of the trend:
* In a bullish bias, bars closing below the continuation level can be highlighted as potential **deep pullback/continuation opportunities** or as warning signals, depending on the user’s playbook.
* In a bearish bias, bars closing above the continuation level are similarly highlighted.
This makes it easy to see where the oscillator’s “extreme pullback” conditions align with structural movements on the actual price bars.
6. Embedded Win-Rate Estimation (WR Table)
The script also includes an internal **win-rate style metric (WR%)** displayed in a small table on the chart:
* It tracks occurrences where:
* A valid bullish or bearish bias is present, and
* The Reversal Histogram is **above the threshold** (i.e., histogram is colored).
* It then approximates the **probability that the trend bias does not change** following such high-pressure pullback events.
* The WR value is shown as a percentage and represents, in essence, the **historical trend-continuation rate** under these specific conditions over the most recent sample of events.
This is not a formal statistical test and does not guarantee future performance, but it provides a quick visual indication of how often these continuation setups have led to **trend persistence** in the recent past.
7. How to Use in Practice
Typical applications include:
Trend-following entries on pullbacks
Identify the main trend using either RVWAP or Market Structure mode.
Wait for a colored histogram bar (reversal pressure above threshold).
Use the continuation reference line and bar coloring on the price chart to refine entry zones or invalidation levels.
Filtering signals from other systems
Run the indicator in the background to confirm trend continuation conditions before taking signals from another strategy (e.g., breakouts or momentum entries).
Only act on long signals when the bias is bullish and a high-pressure pullback has recently occurred; similarly for short signals in bearish conditions.
Risk management and trend monitoring
Monitor when reversal pressure is building against your current position.
Use shifts in bias combined with high reversal pressure to re-evaluate or scale out of trend-following trades.
Recommended steps:
1. Choose your Trend Mode:
- RVWAP for smoother, regime-style trend detection.
- Market Structure for swing-based structural changes.
2. Adjust Trend Length and Pullback Length to match your timeframe (shorter for intraday, longer for swing/position trading).
3. Observe where histogram colors appear and how price reacts around the continuation line and highlighted bars.
4. Integrate these signals into a pre-defined trading plan with clear entry, exit, and risk rules.
8. Limitations and Disclaimer
* This tool is a **technical analysis aid**, not a complete trading system.
* Past behavior of trend continuation or reversal pressure does **not** guarantee future results.
* The embedded WR metric is a **descriptive statistic** based on recent historical conditions only; it is not a promise of performance or a robust statistical forecast.
* All parameters (lengths, thresholds, modes) are user-configurable and should be **tested and validated** on your own data, instruments, and timeframes before any live use.
Disclaimer
This indicator is provided for informational and educational purposes only and does not constitute financial, investment, or trading advice. Trading and investing in financial markets involve substantial risk, including the possible loss of all capital. You are solely responsible for your own trading decisions and for evaluating all information provided by this tool. OmegaTools and the author of this script expressly disclaim any liability for any direct or indirect loss resulting from the use of this indicator. Always consult with a qualified financial professional before making any investment decisions.
BTC -50% Crash to Recovery ZoneGeneral Overview This is a macro-analysis tool designed to visualize the true duration of Bitcoin’s "Suffering & Recovery Cycles." Unlike standard oscillators that only signal oversold conditions, this script highlights the entire timeline required for the market to flush out leverage and return to All-Time Highs (ATH).
Operational Logic The algorithm tracks Bitcoin’s historical All-Time High (ATH).
The Trigger: It activates automatically when the price drops 50% below the last recorded ATH.
The "Recovery Zone": Once triggered, the chart background turns red (indicating a "Drawdown" state). This zone remains active persistently, even during intermediate relief rallies.
The Reset: The zone deactivates only when the price breaks above the previous ATH, marking the official start of a new Price Discovery phase.
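The whole mechanism can be expressed in a few lines; the Pine v5 sketch below follows the three rules above (it is a reconstruction of the described logic, not the author's code).
//@version=5
indicator("ATH Drawdown Zone Sketch", overlay=true)
var float ath = na
var bool inZone = false
if inZone and close > ath
    inZone := false                              // old ATH reclaimed → back to price discovery
ath := na(ath) ? high : math.max(ath, high)      // running all-time high
if not inZone and close <= ath * 0.5
    inZone := true                               // 50% below ATH → recovery zone begins
bgcolor(inZone ? color.new(color.red, 85) : na)
plot(inZone ? ath : na, "ATH to reclaim", color=color.red, style=plot.style_linebr)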
How to Read It
Red Background: We are officially in a Bear Market or Recovery Phase. The asset is technically "underwater." For the long-term investor with a low time preference, this visually defines the accumulation window.
Red Horizontal Line: Indicates the "Target." This is the exact price level of the old ATH that Bitcoin must reclaim to close the bearish cycle.
No Background Color: We are in Price Discovery. The market is healthy and pushing for new highs.
The Financial Lesson This indicator visually demonstrates a fundamental market truth: "Price takes the elevator down, but takes the stairs up." It shows that after a halving of value (-50%), Bitcoin may take months or years to recover previous levels, helping investors filter out the noise of short-term pumps that fail to break the macro-bearish structure.
RSI Driven ATR Trend [NeuraAlgo]
RSI Driven ATR Trend
Dynamic Trend Detection and Strength Analysis
Unlock the market’s hidden rhythm with the RSI Driven ATR Trend , a sophisticated tool designed to measure trend direction and strength using a combination of RSI momentum and ATR-based volatility . This indicator provides real-time insights into bullish and bearish phases, helping traders identify potential turning points and optimize entry and exit decisions.
1. Core Logic:
Dynamically calculates trend levels based on RSI and ATR interactions.
Highlights trend direction with intuitive color coding: green for bullish, red for bearish.
Displays trend strength as a percentage to quantify momentum intensity.
Automatic visual cues for potential trend reversals with “Turn Up” and “Turn Down” labels.
Advanced smoothing and dynamic gating ensure responsive yet stable trend detection.
Compatible with all timeframes and instruments.
2. Inputs Explained:
Rsi Factor: Adjusts the sensitivity of the RSI in trend calculation. Higher values make the trend detection more responsive to momentum changes.
Multiplier: Multiplies the effect of Rsi Factor to fine-tune trend responsiveness.
Bar Back: Number of bars used for peak and dip calculations, determining how far back the indicator looks for trend changes.
Period: Lookback period used in trend gating and ATR calculations.
Source: Price source for calculations (default is close).
Main Colors: Customize bullish and bearish trend colors.
3. How it Works:
The indicator calculates RSI values and ATR-based dynamic ranges to determine upper and lower trend levels.
Trend direction is determined by price crossing above (bullish) or below (bearish) the dynamic trend line.
Trend strength is expressed as a percentage relative to the trend line, helping you assess momentum intensity.
Visual cues like "Turn Up" and "Turn Down" labels indicate potential trend reversals.
Bars are colored dynamically based on trend direction for quick interpretation.
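The exact NeuraAlgo formula is not published, but a generic sketch of the same family of logic is shown below: an ATR trailing level whose width is modulated by RSI, flipping direction when price crosses it. All coefficients and names here are illustrative assumptions, not the indicator's internals.
//@version=5
indicator("RSI-Scaled ATR Trend Sketch", overlay=true)
rsiFactor = input.float(1.0, "RSI Factor")
mult = input.float(2.0, "Multiplier")
period = input.int(14, "Period")
r = ta.rsi(close, period)
band = ta.atr(period) * mult * (1.0 + rsiFactor * math.abs(r - 50) / 50)  // wider band when momentum is strong
var int dir = 1
var float level = na
level := na(level) ? close - band : level
if dir == 1
    level := math.max(level, close - band)       // ratchet the level up under price
    if close < level
        dir := -1
        level := close + band
else
    level := math.min(level, close + band)       // ratchet the level down over price
    if close > level
        dir := 1
        level := close - band
plot(level, "Dynamic trend line", color=dir == 1 ? color.green : color.red, linewidth=2)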
Ideal for traders seeking a clear, actionable view of market trends without the clutter of multiple indicators. RSI Driven ATR Trend translates complex price behavior into an easy-to-read visual guide, helping you make smarter trading decisions.
Happy Trading!
I4I Inside Vortex Strike RateThis indicator identifies what I call an "Inside Vortex": it's similar to a Doji but stricter, in that the bar has to sit inside a Keltner channel and also have a lower ATR than a blended average.
The bar itself is not that special. But it indicates that a potential big move might come in the next 2 periods.
After the pattern: it then looks at what I call the Market Maker High and Low: a % of a blended ATR. It then looks back 100-200 or more bars and calculates the overall historical strike % for the high and low after the pattern happens.
This allows us to know how often these levels are hit within the next 2 periods to find if we have any edge on spread, call or put prices or use them as targets.
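The pattern and level definitions are the author's own, but the strike-rate bookkeeping itself is generic; the Pine v5 sketch below shows only that part, with deliberately simple stand-ins (a small-range bar as the pattern, 0.5 × ATR as the upper level), so treat every threshold here as an assumption.
//@version=5
indicator("Strike-Rate Sketch", overlay=false)
atrVal = ta.atr(14)
pattern = (high - low) < atrVal * 0.5            // stand-in pattern condition
lvlUp = close + atrVal * 0.5                     // stand-in "Market Maker High"
var int hits = 0
var int total = 0
// two bars after each pattern bar, check whether the level set back then was touched
if pattern[2]
    total += 1
    if math.max(high, high[1]) >= lvlUp[2]
        hits += 1
strikeRate = total > 0 ? 100.0 * hits / total : na
plot(strikeRate, "Strike rate % (upper level, next 2 bars)", color=color.green)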
So its:
Pattern:
Levels
Strike Rate.
Very unique and EXTREMELY useful. Especially for options traders.
Volatility Signal-to-Noise Ratio🙏🏻 this is VSNR: the most effective and simple volatility regime detector & automatic volatility threshold scaler that somehow no1 ever talks about.
This is simply an inverse of the coefficient of variation of absolute returns, but properly constructed taking into account temporal information, and made online via recursive math with algocomplexity O(1) both in expanding and moving windows modes.
How do the available alternatives differ (while some’re just worse)?
Mainstream quant stat tests like Durbin-Watson, Dickey-Fuller etc: default implementations are ALL not time aware. They measure different kinds of regime, which is less (if at all) relevant for actual trading context. Mix of different math, high algocomplexity.
The closest one is MMI by financialhacker, but his approach is also not time aware, and has a higher algocomplexity anyways. Best alternative to mine, but pls modify it to use a time-weighted median.
Fractal dimension & its derivatives by John Ehlers: again not time aware, very low info gain, relies on bar sizes (high and lows), which don’t always exist unlike changes between datapoints. But it’s a geometric tool in essence, so this is fundamental. Let it watch your back if you already use it.
Hurst exponent: much higher algocomplexity, mix of parametric and non-parametric math inside. An invention, not a math entity. Again, not time aware. Also measures different kinds of regime.
How to set it up:
Given my other tools, I choose length so that it will match the amount of data that your trading method or study uses multiplied by ~ 4-5. E.g if you use some kind of bands to trade volatility and you calculate them over moving window 64, put VSNR on 256.
However it depends mathematically on many things, so for your methods you may instead need multipliers of 1 or ~ 16.
Additionally if you wanna use all data to estimate SNR, put 0 into length input.
How to use for regime detection:
First we define:
MR bias: mean reversion bias meaning volatility shorts would work better, fading levels would work better
Momo bias: momentum bias meaning volatility longs would work better, trading breakouts of levels would work better.
The study plots 3 horizontal thresholds for VSNR, just check its location:
Above upper level: significant Momo bias
Above 1 : Momo bias
Below 1 : MR bias
Below lower level: significant MR bias
Take a look at the screenshots, 2 completely different volatility regimes are spotted by VSNR, while an ADF does not show different regime:
^^ CBOT:ZN1!
^^ INDEX:BTCUSD
How to use as automatic volatility threshold scaler
Copy the code from the script, and use VSNR as a multiplier for your volatility threshold.
E.g you use a regression channel and fade/push upper and lower thresholds which are RMSEs multiples. Inside the code, multiply RMSE by VSNR, now you’re adaptive.
^^ The same logic as when MM bots widen spreads with vola goes wild.
How it works:
Returns follow Laplace distro -> logically abs returns follow exponential distro , cuz laplace = double exponential.
Exponential distro has a natural coefficient of variation = 1 -> signal to noise ratio defined as mean/stdev = 1 as well. The same can be said for Student t distro with parameter v = 4. So 1 is our main threshold.
We can add additional thresholds by discovering SNRs of Student t with v = 3 and v = 5 (+- 1 from baseline v = 4). These have lighter & heavier tails each favoring mean reversion or momentum more. I computed the SNR values you see in the code with mpmath python module, with precision 256 decimals, so you can trust it I put it on my momma.
Then I use exponential smoothing with properly defined alphas (one matches cumulative WMA and another minimizes error with WMA in moving window mode) to estimate SNR of abs returns.
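A stripped-down Pine v5 sketch of the estimator is below. It keeps only the core recursion (running mean and variance of absolute returns with a fixed alpha, SNR = mean/stdev, baseline threshold at 1); the published script's WMA-matched alphas and the Student-t upper/lower thresholds are not reproduced here, so the constants are illustrative.
//@version=5
indicator("VSNR Sketch", overlay=false)
len = input.int(256, "Length (0 = expanding)", minval=0)
n = bar_index + 1
alpha = len == 0 ? 1.0 / n : 2.0 / (len + 1)     // expanding mean when len = 0, else exponential
absRet = math.abs(close / close[1] - 1)
var float m1 = na                                // running mean of |returns|
var float m2 = na                                // running mean of |returns|^2
m1 := na(m1) ? absRet : m1 + alpha * (absRet - m1)
m2 := na(m2) ? absRet * absRet : m2 + alpha * (absRet * absRet - m2)
vsnr = m1 / math.sqrt(math.max(m2 - m1 * m1, 1e-12))
plot(vsnr, "VSNR", color=color.blue)
hline(1.0, "Baseline (exponential, CV = 1)")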
…
Lightweight huh?
∞
Z-Score Regime DetectorThe Z-Score Regime Detector is a statistical market regime indicator that helps identify bullish and bearish market conditions based on normalized momentum of three core metrics:
- Price (Close)
- Volume
- Market Capitalization (via CRYPTOCAP:TOTAL)
Each metric is standardized using the Z-score over a user-defined period, allowing comparison of relative extremes across time. This removes raw value biases and reveals underlying momentum structure.
📊 How it Works
- Z-Score: Measures how far a current value deviates from its average in terms of standard deviations.
- A Bullish Regime is identified when both price and market cap Z-scores are above the volume Z-score.
- A Bearish Regime occurs when price and market cap Z-scores fall below volume Z-score.
Bias Signal:
- Bullish Bias = Price Z-score > Market Cap Z-score
- Bearish Bias = Market Cap Z-score > Price Z-score
This provides a statistically consistent framework to assess whether the market is flowing with strength or stress.
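As a sketch of that logic in Pine v5 (the regime rules and the CRYPTOCAP:TOTAL symbol follow the description above; the 50-bar period is my assumption):
//@version=5
indicator("Z-Score Regime Sketch", overlay=false)
len = input.int(50, "Z-Score Period")
zscore(src) =>
    (src - ta.sma(src, len)) / ta.stdev(src, len)
mcap = request.security("CRYPTOCAP:TOTAL", timeframe.period, close)
zPx = zscore(close)
zVol = zscore(volume)
zCap = zscore(mcap)
bullRegime = zPx > zVol and zCap > zVol          // price & market cap momentum above volume
bearRegime = zPx < zVol and zCap < zVol
plot(zPx, "Price Z", color=color.blue)
plot(zVol, "Volume Z", color=color.gray)
plot(zCap, "Market Cap Z", color=color.orange)
bgcolor(bullRegime ? color.new(color.green, 85) : bearRegime ? color.new(color.red, 85) : na)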
✅ Why This Might Be Effective
- Normalizing the data via Z-scores allows comparison of diverse metrics on a common scale.
- Using market cap offers broader insight than price alone, especially for crypto.
- Volume as a reference threshold helps identify accumulation/distribution regimes.
- Simple regime logic makes it suitable for trend confirmation, filtering, or position biasing in systems.
⚠️ Disclaimer
This script is for educational purposes only and should not be considered financial advice. Always perform your own research and risk management. Past performance is not indicative of future results. Use at your own discretion.
P/E, EPS, Price & Price-to-Sales DisplayPrice-to-earnings ratio,
EPS,
Price, and
Price-to-Sales Display
ATR multiple from High & LowA simple numerical indicator measuring ATR multiple from recent 252 days high and low.
ATR multiples from high (and low) are used as a base in many systematic trading and trend following systems. As an example, many systems buy after a 2.5–4 ATR multiple pullback in a strong stock if the regime allows it. This would then be paired with an entry tactic, for example buying as the stock recaptures a pivot within the upper range or a MA, or breaks out again after this mid-term pullback/shakeout.
This indicator uses a function which captures the recent high and low whether or not 252 bars exist, which is not how the standard highest/lowest lookback works in TradingView. This means it also works with recent IPOs.
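A minimal Pine v5 sketch of the measurement, including that IPO-friendly trick of capping the window at the number of available bars (the ATR length and plot choices are assumptions):
//@version=5
indicator("ATR Multiple from High/Low Sketch", overlay=false)
len = math.min(252, bar_index + 1)      // use at most 252 bars, fewer if the symbol is younger
atrVal = ta.atr(14)
hi = ta.highest(high, len)
lo = ta.lowest(low, len)
fromHigh = (hi - close) / atrVal        // ATR multiples below the recent high
fromLow = (close - lo) / atrVal         // ATR multiples above the recent low
plot(fromHigh, "ATR multiple from high", color=color.red)
plot(fromLow, "ATR multiple from low", color=color.green)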
I prefer to overlay the indicator in one of the lower panes, for example the volume pane and then right click on the indicator and select Pin to scale > No scale (fullscreen).
Static K-means Clustering | InvestorUnknownStatic K-Means Clustering is a machine-learning-driven market regime classifier designed for traders who want a data-driven structure instead of subjective indicators or manually drawn zones.
This script performs offline (static) K-means training on your chosen historical window. Using four engineered features:
RSI (Momentum)
CCI (Price deviation / Mean reversion)
CMF (Money flow / Strength)
MACD Histogram (Trend acceleration)
It groups past market conditions into K distinct clusters (regimes). After training, every new bar is assigned to the nearest cluster via Euclidean distance in 4-dimensional standardized feature space.
This allows you to create models like:
Regime-based long/short filters
Volatility phase detectors
Trend vs. chop separation
Mean-reversion vs. breakout classification
Volume-enhanced money-flow regime shifts
Full machine-learning trading systems based solely on regimes
Note:
This script is not a universal ML strategy out of the box.
The user must engineer the feature set to match their trading style and target market.
K-means is a tool, not a ready made system, this script provides the framework.
Core Idea
K-means clustering takes raw, unlabeled market observations and attempts to discover structure by grouping similar bars together.
// STEP 1 — DATA POINTS ON A COORDINATE PLANE
// We start with raw, unlabeled data scattered in 2D space (x/y).
// At this point, nothing is grouped—these are just observations.
// K-means will try to discover structure by grouping nearby points.
//
// y ↑
// |
// 12 | •
// | •
// 10 | •
// | •
// 8 | • •
// |
// 6 | •
// |
// 4 | •
// |
// 2 |______________________________________________→ x
// 2 4 6 8 10 12 14
//
//
//
// STEP 2 — RANDOMLY PLACE INITIAL CENTROIDS
// The algorithm begins by placing K centroids at random positions.
// These centroids act as the temporary “representatives” of clusters.
// Their starting positions heavily influence the first assignment step.
//
// y ↑
// |
// 12 | •
// | •
// 10 | • C2 ×
// | •
// 8 | • •
// |
// 6 | C1 × •
// |
// 4 | •
// |
// 2 |______________________________________________→ x
// 2 4 6 8 10 12 14
//
//
//
// STEP 3 — ASSIGN POINTS TO NEAREST CENTROID
// Each point is compared to all centroids.
// Using simple Euclidean distance, each point joins the cluster
// of the centroid it is closest to.
// This creates a temporary grouping of the data.
//
// (Coloring concept shown using labels)
//
// - Points closer to C1 → Cluster 1
// - Points closer to C2 → Cluster 2
//
// y ↑
// |
// 12 | 2
// | 1
// 10 | 1 C2 ×
// | 2
// 8 | 1 2
// |
// 6 | C1 × 2
// |
// 4 | 1
// |
// 2 |______________________________________________→ x
// 2 4 6 8 10 12 14
//
// (1 = assigned to Cluster 1, 2 = assigned to Cluster 2)
// At this stage, clusters are formed purely by distance.
Your chosen historical window becomes the static training dataset , and after fitting, the centroids never change again.
This makes the model:
Predictable
Repeatable
Consistent across backtests
Fast for live use (no recalculation of centroids every bar)
Static Training Window
You select a period with:
Training Start
Training End
Only bars inside this range are used to fit the K-means model. This window defines:
the market regime examples
the statistical distributions (means/std) for each feature
how the centroids will be positioned post-training
Bars before training = fully transparent
Training bars = gray
Post-training bars = full colored regimes
Feature Engineering (4D Input Vector)
Every bar during training becomes a 4-dimensional point: the standardized RSI, CCI, CMF, and MACD histogram values.
This combination balances momentum, volatility, mean reversion, and trend acceleration, giving the algorithm a richer "market fingerprint" per bar.
Standardization
To prevent any feature from dominating due to scale differences (e.g., CMF near zero vs CCI ±200), all features are standardized:
standardize(value, mean, std) =>
(value - mean) / std
Centroid Initialization
Centroids start at diverse coordinates using various curves:
linear
sinusoidal
sign-preserving quadratic
tanh compression
init_centroids() =>
// Spread centroids across using different shapes per feature
for c = 0 to k_clusters - 1
frac = k_clusters == 1 ? 0.0 : c / (k_clusters - 1.0) // 0 → 1
v = frac * 2 - 1 // -1 → +1
array.set(cent_rsi, c, v) // linear
array.set(cent_cci, c, math.sin(v)) // sinusoidal
array.set(cent_cmf, c, v * v * (v < 0 ? -1 : 1)) // quadratic sign-preserving
array.set(cent_mac, c, tanh(v)) // compressed
This makes initial cluster spread “random” even though true randomness is hardly achieved in pinescript.
K-Means Iterative Refinement
The algorithm repeats these steps:
(A) Assignment Step: Each bar is assigned to the nearest centroid via Euclidean distance in 4D:
distance = sqrt(dx² + dy² + dz² + dw²)
(B) Update Step: Centroids move to the mean of the points assigned to them. This repeats for the configured number of iterations.
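For completeness, the assignment step above can be sketched in the same style as the earlier snippets in this description; it assumes the same centroid arrays (cent_rsi, cent_cci, cent_cmf, cent_mac) and the k_clusters input, and simply returns the index of the nearest centroid.
// Sketch of the assignment step (illustrative, mirrors the distance formula above)
assign_cluster(f_rsi, f_cci, f_cmf, f_mac) =>
    best = -1
    bestDist = 1e10
    for c = 0 to k_clusters - 1
        dx = f_rsi - array.get(cent_rsi, c)
        dy = f_cci - array.get(cent_cci, c)
        dz = f_cmf - array.get(cent_cmf, c)
        dw = f_mac - array.get(cent_mac, c)
        d = math.sqrt(dx * dx + dy * dy + dz * dz + dw * dw)
        if d < bestDist
            bestDist := d
            best := c
    best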
LIVE REGIME CLASSIFICATION
After training, each new bar is:
Standardized using the training mean/std
Compared to all centroids
Assigned to the nearest cluster
Bar color updates based on cluster
No re-training occurs. This ensures:
No lookahead bias
Clean historical testing
Stable regimes over time
CLUSTER BEHAVIOR & TRADING LOGIC
Clusters (0, 1, 2, 3…) hold no inherent meaning. The user defines what each cluster does.
Example of custom actions:
Cluster 0 → Cash
Cluster 1 → Long
Cluster 2 → Short
Cluster 3+ → Cash (noise regime)
This flexibility means:
One trader might have cluster 0 as consolidation.
Another might repurpose it as a breakout-loading zone.
A third might ignore 3 clusters entirely.
Example on ETHUSD
Important Note:
Any change of parameters or chart timeframe or ticker can cause the “order” of clusters to change
The script does NOT assume any cluster equals any actionable bias, user decides.
PERFORMANCE METRICS & ROC TABLE
The indicator computes average 1-bar ROC for each cluster in:
Training set
Test (live) set
This helps measure:
Cluster profitability consistency
Regime forward predictability
Whether a regime is noise, trend, or reversion-biased
EQUITY SIMULATION & FEES
Designed for close-to-close realistic backtesting.
Position = cluster of previous bar
Fees applied only on regime switches. Meaning:
Staying long → no fee
Switching long→short → fee applied
Switching any→cash → fee applied
Fee input is a percentage; the script converts it internally.
Disclaimers
⚠️ This indicator uses machine-learning but does not predict the future. It classifies similarity to past regimes, nothing more.
⚠️ Backtest results are not indicative of future performance.
⚠️ Clusters have no inherent “bullish” or “bearish” meaning. You must interpret them based on your testing and your own feature engineering.