
Winning “Brewery of the Year” Was Just Step One
Coveting the crown’s one thing. Turning it into an empire’s another. So Westbound & Down didn’t blink after winning Brewery of the Year at the 2025 Great American Beer Festival. They began their next phase. Already Colorado’s most-awarded brewery, they’ve grown distribution 2,800% since 2019, including a Whole Foods retail partnership. And after this latest title, they’ll quadruple distribution by 2028. Become an early-stage investor today.
This is a paid advertisement for Westbound & Down’s Regulation CF Offering. Please read the offering circular at https://invest.westboundanddown.com/
Elite Quant Plan – 14-Day Free Trial (This Week Only)
No card needed. Cancel anytime. Zero risk.
You get immediate access to:
Full code from every article (including today’s HMM notebook)
Private GitHub repos & templates
All premium deep dives (3–5 per month)
2 × 1-on-1 calls with me
One custom bot built/fixed for you
Try the entire Elite experience for 14 days — completely free.
→ Start your free trial now 👇
(Doors close in 7 days or when the post goes out of the spotlight — whichever comes first.)
See you on the inside.
👉 Upgrade Now →
🔔 Limited-Time Holiday Deal: 20% Off Our Complete 2026 Playbook! 🔔
Level up before the year ends!
AlgoEdge Insights: 30+ Python-Powered Trading Strategies – The Complete 2026 Playbook
30+ battle-tested algorithmic trading strategies from the AlgoEdge Insights newsletter – fully coded in Python, backtested, and ready to deploy. Your full arsenal for dominating 2026 markets.
Special Promo: Use code WINTER2025 for 20% off
Valid only until December 20, 2025 — act fast!
👇 Buy Now & Save 👇
Instant access to every strategy we've shared, plus exclusive extras.
— AlgoEdge Insights Team
Premium Members – Your Full Notebook Is Ready
The complete Google Colab notebook from today’s article (with live data, the full logistic level-detection model, interactive charts, statistics, and one-click CSV export) is waiting for you.
Preview of what you’ll get:

Inside this single-cell Google Colab code:
Automatic stock/ETF data download (from user-specified date → today) using yfinance
Local pivot detection (highs & lows) with adjustable lookback window (PIVOT_LEN)
Momentum filter using RSI (>50 = bullish conviction)
Range/conviction filter using ATR (candle body > ATR = strong move)
Logistic (sigmoid) scoring to assign a probability to each pivot based on combined signals
Strict filtering: only levels above PROB_THRESHOLD probability are kept
Distance & clustering rules: ignore levels too far (MAX_DISTANCE_ATR) or too close (MIN_LEVEL_DIFF)
Fully customizable parameters: ticker, timeframe, sensitivity, colors, etc.
Works on any ticker (stocks, ETFs, crypto like BTC-USD, etc.) and any timeframe (daily, hourly, weekly…) in one line change
Free readers – you already got the full breakdown and visuals in the article. Paid members – you get the actual tool.
Not upgraded yet? Fix that in 10 seconds here👇
Google Colab Notebook With Full Code Is Available At the End Of The Article Behind The Paywall 👇 (For Paid Subs Only)
The method presented here isolates only the most statistically significant price levels using a blend of pivot detection, momentum, and volatility.
Instead of plotting every high or low, we filter for zones where price action aligns with strong RSI and range expansion.
We then score the zones with a logistic function to get a probability estimate. This gives us price levels which have shown historical interest.
Each parameter in the process is adjustable, so you can fine-tune the sensitivity for any asset or timeframe as required.
The complete Python notebook for the analysis is provided below.

1. Logistic-Filtered Price Levels
Suppose you’re trading a stock in a volatile week. The price surges, then stalls, prints a sharp reversal, and the chart fills with local highs and lows.
Most trading tools would mark every single zigzag as significant, but this would leave you with a crowded and indecipherable chart.
What we actually want is a way to cut through this noise and focus only on the levels that saw real interest.
Our method extracts only the most relevant support and resistance levels by combining three key signals:
local pivots
momentum shifts
and range expansion
Step 1: Identify Local Pivots
The process begins by scanning the price series for local maxima (highs) and minima (lows) within a moving window.
For each bar, the algorithm checks whether it represents the highest high or lowest low over the lookback period.
This window is controlled by a lookback parameter (PIVOT_LEN, set to 14 here) and can be adjusted as needed.
Mathematically, a bar at index $i$ is a pivot high if

$$\text{High}_i = \max\left(\text{High}_{i-n}, \ldots, \text{High}_{i+n}\right)$$

and a pivot low if

$$\text{Low}_i = \min\left(\text{Low}_{i-n}, \ldots, \text{Low}_{i+n}\right)$$

where $n$ is half the window size (here $n = \texttt{PIVOT\_LEN} = 14$).
Pivot detection reduces the chart to only those price points where significant reversals / reactions occurred. These are the candidate levels.
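If you prefer a vectorized version of this check, here is a minimal sketch using pandas rolling windows; it assumes an OHLC DataFrame like the one the notebook fetches later, and the loop-based is_high/is_low functions in section 2.4 implement the same rule.
import pandas as pd

# Sketch: a bar is a pivot high (low) if it holds the max High (min Low)
# of a centred window of 2*PIVOT_LEN + 1 bars, matching the notebook's loop.
PIVOT_LEN = 14
WINDOW = 2 * PIVOT_LEN + 1

def mark_pivots(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["pivot_high"] = df["High"].eq(df["High"].rolling(WINDOW, center=True).max())
    out["pivot_low"] = df["Low"].eq(df["Low"].rolling(WINDOW, center=True).min())
    return out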
Step 2: Contextualize Each Level with Momentum and Range
Not all pivots are important. A pivot formed during a period of weak momentum or low volatility is less likely to matter in the future.
To address this, the method attaches two context signals to every candidate:
Momentum: Measured using RSI, which quantifies the speed and change of price movements over the same window:

$$\text{RSI} = 100 - \frac{100}{1 + RS}, \qquad RS = \frac{\text{smoothed average gain}}{\text{smoothed average loss}}$$

Range Expansion: Captured via ATR, which measures market volatility:

$$\text{TR}_t = \max\left(\text{High}_t - \text{Low}_t,\ \lvert\text{High}_t - \text{Close}_{t-1}\rvert,\ \lvert\text{Low}_t - \text{Close}_{t-1}\rvert\right), \qquad \text{ATR}_t = \text{EMA}_n\!\left(\text{TR}_t\right)$$
Each pivot is then labeled according to (i) whether it occurred during a period of above-average momentum (RSI > 50) and (ii) whether its candle body exceeded the ATR (a proxy for high-conviction moves).
Step 3: Score the Significance with a Logistic Function
Now, the method translates these signals into a single probability score using the logistic (sigmoid) function:

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$

Here, $z$ is the sum of the binary signals plus a constant offset of one (as in the notebook code):

$$z = 1 + b_{\text{RSI}} + b_{\text{body}}$$

where each binary term is either $+1$ or $-1$, depending on whether its threshold is exceeded.
This composite score reflects how strongly price, momentum, and volatility align at each level.
The logistic function compresses the result to a value between 0 and 1, which can be interpreted as the “probability” that the level is significant.
Step 4: Filter for Statistical Relevance
Only pivots with a logistic score above a certain threshold (e.g. 0.7) are retained.
This acts as the confidence filter, so that we can exclude levels where the signals do not align.
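To make the filter concrete, here is the arithmetic for the four scores the notebook’s $z = 1 + b_{\text{RSI}} + b_{\text{body}}$ can produce:

$$\sigma(3) \approx 0.953, \qquad \sigma(1) \approx 0.731, \qquad \sigma(-1) \approx 0.269, \qquad \sigma(-3) \approx 0.047$$

With the default threshold of 0.70, only pivots where at least one of the two flags is positive ($z \ge 1$) survive, and fully aligned pivots score highest.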
Step 5: Practical Filtering
To avoid clutter, further rules remove levels that:
Are too far from the current price (using a multiple of ATR),
Or are too close to an already accepted level (using a min. price gap).
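For reference, the same distance and spacing rules can be written as one standalone helper. This is only a sketch mirroring the pruning code in section 2.5, with levels assumed to be (index, price, probability) tuples:
def prune_levels(levels, last_price, atr_now, max_dist_atr=10, min_gap=5.0):
    # Drop levels further than max_dist_atr ATRs from the last price, then keep
    # the highest-probability levels first, skipping any level closer than
    # min_gap to one already accepted.
    near = [lv for lv in levels if abs(lv[1] - last_price) <= atr_now * max_dist_atr]
    kept = []
    for idx, lvl, prob in sorted(near, key=lambda x: x[2], reverse=True):
        if all(abs(lvl - k[1]) >= min_gap for k in kept):
            kept.append((idx, lvl, prob))
    return sorted(kept, key=lambda x: x[0])  # back to chronological order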

Figure 1. Walkthrough of the logistic-filtered level detection: each step shows how pivots form, get scored, and only high-probability bands appear.
2. Key Price Levels in Python
2.1. User Parameters
This block sets up the environment and parameters:
INTERVAL: Smaller intervals (e.g. “1h”) produce more bars and finer swings (but more noise); larger intervals (e.g. “1wk”) smooth the data and show major trends.
PIVOT_LEN: A larger window (20–30) catches only major peaks/troughs; a smaller one (5–10) spots subtle turns but can clutter your chart.
PROB_THRESHOLD: Raising it above 0.8 shows only the most reliable levels; lowering it toward 0.5 adds more candidates with weaker backing.
MAX_DISTANCE_ATR: A higher multiple (15+) includes distant levels; a lower multiple (3–5) restricts focus to areas near the current price.
MIN_LEVEL_DIFF: Increasing this gap (e.g. 10) forces widely spaced levels; decreasing it (1–2) allows tighter clusters of nearby levels.
LOOKBACK_BARS: Set to an integer (like 100) to limit analysis to recent bars for speed; None processes the full date range.
LINE_CLR, PRICE_CLR, BG_CLR: Tweak these hex codes to match your UI or improve contrast in your plot.
import yfinance as yf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import dates as mdates
from datetime import datetime, timedelta
# ───────── USER SETTINGS
TICKER = "NVDA"
START_DATE = "2023-01-01"
END_DATE = (datetime.today() + timedelta(days=1)).strftime('%Y-%m-%d') # today plus one day
INTERVAL = "1d" # 1d, 1wk, 1mo …
PIVOT_LEN = 14 # Lookback window for pivot detection. Increase: fewer, more significant levels. Decrease: more, but noisier levels.
PROB_THRESHOLD = 0.70 # Minimum logistic probability for a level to show. Increase: fewer, higher-confidence levels. Decrease: more, but weaker levels.
MAX_DISTANCE_ATR = 10 # Max distance (in ATR multiples) from last price to include a level. Increase: more distant levels. Decrease: focus near current price.
MIN_LEVEL_DIFF = 5.0 # Minimum allowed gap between levels (in price units). Increase: fewer, more widely spaced levels. Decrease: more clustered levels.
LOOKBACK_BARS = None # Set to int (e.g. 100) to plot only recent bars. Use None to show all data.
LINE_CLR = "#f23645" # color for levels
PRICE_CLR = "#d0d0d0" # price line color
BG_CLR = "#0d1117" # background color
VOLUME_SCALE = 0.5 # 0.5 cuts bar heights in half
2.2. Fetching Data Function
The fetch function pulls OHLCV data from Yahoo Finance for the chosen ticker, date range, and interval, e.g. daily, weekly, or hourly.
Note that Yahoo Finance limits history depending on granularity; for example, hourly data only goes back about two years.
def fetch(ticker, start, end, interval):
    df = yf.download(
        ticker, start=start, end=end,
        interval=interval, auto_adjust=False,
        group_by="column", progress=False
    )
    if df.empty:
        raise ValueError("No data returned.")
    if isinstance(df.columns, pd.MultiIndex):
        df.columns = df.columns.get_level_values(0)
    df.columns = df.columns.map(str.title)
    return df.dropna(subset=["Open","High","Low","Close","Volume"])
2.3. Computing ATR, RSI and Sigmoid
The rsi function computes a smoothed Relative Strength Index.
atr calculates a smoothed Average True Range from true range components.
The sigmoid lambda converts any combined score into a probability between 0 and 1.
def rsi(s, n):
    d = s.diff()
    up = d.clip(lower=0).ewm(alpha=1/n, min_periods=n).mean()
    dn = -d.clip(upper=0).ewm(alpha=1/n, min_periods=n).mean()
    rs = up / dn
    return 100 - 100 / (1 + rs)

def atr(df, n):
    h_l = df["High"] - df["Low"]
    h_c = (df["High"] - df["Close"].shift()).abs()
    l_c = (df["Low"] - df["Close"].shift()).abs()
    tr = pd.concat([h_l, h_c, l_c], axis=1).max(axis=1)
    return tr.ewm(alpha=1/n, min_periods=n).mean()

sigmoid = lambda z: 1 / (1 + np.exp(-z))
2.4. Data Preparation
Here, we fetch and optionally trim the data, then compute RSI, candle body size, and ATR before binning them into momentum and range flags and defining is_high/is_low for pivot detection.
df = fetch(TICKER, START_DATE, END_DATE, INTERVAL)
if LOOKBACK_BARS:
    df = df.tail(LOOKBACK_BARS)
df["RSI"] = rsi(df["Close"], PIVOT_LEN).fillna(50)
df["BodySize"] = (df["Close"] - df["Open"]).abs()
df["ATR"] = atr(df, PIVOT_LEN)
df["RSI_bin"] = np.where(df["RSI"] > 50, 1, -1)
df["Body_bin"] = np.where(df["BodySize"] > df["ATR"], 1, -1)
def is_high(i):
    w = df["High"].iloc[i-PIVOT_LEN : i+PIVOT_LEN+1]
    return df["High"].iat[i] == w.max()

def is_low(i):
    w = df["Low"].iloc[i-PIVOT_LEN : i+PIVOT_LEN+1]
    return df["Low"].iat[i] == w.min()
2.5. Generate Levels
This block scans for pivots, scores each one, and keeps only those above the probability threshold.
It then filters out levels that are too far from the last price or too close to each other.
Finally, it sorts the remaining levels by date.
levels = []
for i in range(PIVOT_LEN, len(df) - PIVOT_LEN):
    if is_high(i) or is_low(i):
        score = 1 + df["RSI_bin"].iat[i] + df["Body_bin"].iat[i]
        prob = sigmoid(score)
        if prob >= PROB_THRESHOLD:
            lvl = df["Low"].iat[i] if is_low(i) else df["High"].iat[i]
            levels.append((i, lvl, prob))
last = df["Close"].iat[-1]
atr_now = df["ATR"].iat[-1]
levels = [
(i, lvl, prob)
for i, lvl, prob in levels
if abs(lvl - last) <= atr_now * MAX_DISTANCE_ATR
]
levels_sorted = sorted(levels, key=lambda x: x[2], reverse=True)
pruned = []
for i, l, p in levels_sorted:
    if not any(abs(l - l2) < MIN_LEVEL_DIFF for _, l2, _ in pruned):
        pruned.append((i, l, p))
levels = sorted(pruned, key=lambda x: x[0])
2.6. Plot Results
Finally, we plot the results, which include the detected key price levels and a price-by-volume profile for further context.
The band around the price levels marks one ATR above and below to show the expected volatility zone around that level.
plt.style.use("dark_background")
fig = plt.figure(figsize=(14, 7), facecolor=BG_CLR)
ax = fig.add_subplot(1, 1, 1)
ax.set_facecolor(BG_CLR)
# volume bars on left
ax2 = ax.twinx()
delta = df.index[1] - df.index[0]
w = delta * 0.8
raw_max = df["Volume"].max()
scaled = df["Volume"] * VOLUME_SCALE
cols = ["green" if c >= o else "red"
for c, o in zip(df["Close"], df["Open"])]
ax2.bar(df.index, scaled,
width=w, color=cols,
alpha=0.5, align="center", zorder=0)
ax2.set_ylim(0, raw_max * 1.1)
ax2.spines["right"].set_visible(False)
ax2.spines["left"].set_visible(True)
ax2.yaxis.set_label_position("left")
ax2.yaxis.set_ticks_position("left")
ax2.tick_params(axis="y", colors="#666666")
ax2.set_ylabel("Vol (scaled)", color="#666666")
# price line
ax.plot(df.index, df["Close"],
lw=1, color=PRICE_CLR, zorder=3)
# ATR bands, levels, markers
for idx, lvl, prob in levels:
    top = lvl + atr_now
    bot = lvl - atr_now
    ax.fill_between(df.index, bot, top,
                    color=LINE_CLR, alpha=0.1, zorder=1)
    ax.hlines(lvl, df.index[idx], df.index[-1],
              colors=LINE_CLR, lw=2, zorder=2)
    ax.text(df.index[idx] - delta*0.2, lvl,
            f"{prob*100:.1f}% {lvl:.2f}",
            color=LINE_CLR, fontsize=8,
            ha="right", va="center", zorder=4)
    ax.scatter(df.index[idx], df["Close"].iat[idx],
               s=40, marker="o",
               color=LINE_CLR, zorder=5)
# price-by-volume on right
ax3 = fig.add_axes([0.92, 0.1, 0.06, 0.8], sharey=ax)
bins = np.linspace(df["Low"].min(), df["High"].max(), 30)
vol_p, edges = np.histogram(df["Close"], bins=bins,
weights=df["Volume"])
centers = (edges[:-1] + edges[1:]) / 2
ax3.barh(centers, vol_p,
height=edges[1]-edges[0],
color="#666666", alpha=0.5)
ax3.invert_xaxis()
ax3.axis("off")
# styling
ax.spines["top"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.yaxis.tick_right()
ax.tick_params(axis="x", colors="#666666")
ax.tick_params(axis="y", colors="#666666")
ax.grid(False)
ax.set_title(f"{TICKER} • Key price levels above {PROB_THRESHOLD} probability from {START_DATE} to {END_DATE}", color="#ffffff", pad=10)
ax.xaxis.set_major_locator(mdates.AutoDateLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y-%m"))
# adjust margins instead of tight_layout
fig.subplots_adjust(left=0.05, right=0.90)
plt.show()
Figure 2. NVDA chart with statistically filtered support and resistance bands, each marked with its probability and ATR-based volatility zone.
3. Parameter Tuning Tips
Raise PIVOT_LEN or PROB_THRESHOLD for higher-confidence levels; lower them for more signals but more noise.
Increase MAX_DISTANCE_ATR to capture distant levels; decrease it to focus near the current price.
Widen MIN_LEVEL_DIFF to avoid crowded bands; narrow it if you want every possible zone.
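For illustration only, two presets built from the ranges above might look like this (the values are assumptions, not recommendations; copy them into the USER SETTINGS block and re-run):
# Illustrative presets assembled from the guidance above (assumptions, not advice).
STRICT = dict(PIVOT_LEN=25, PROB_THRESHOLD=0.85, MAX_DISTANCE_ATR=4, MIN_LEVEL_DIFF=10.0)     # few, high-confidence levels near price
EXPLORATORY = dict(PIVOT_LEN=7, PROB_THRESHOLD=0.55, MAX_DISTANCE_ATR=15, MIN_LEVEL_DIFF=1.5)  # more candidates, wider net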
4. Limitations and Practical Notes
This method finds key levels based only on historical price, momentum, and range. Future price action may still ignore them.
Results depend on clean data and well-tuned parameters; outliers or regime shifts can weaken level reliability.
Always test on unseen data before you implement it in live trading.
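One hedged way to do this with the notebook’s own objects is to recompute the levels on an earlier slice of df and then check how the held-out bars behaved around them. A minimal helper for that second step, assuming levels and atr_now were produced from the in-sample slice only (the cutoff date is purely hypothetical):
import pandas as pd

def out_of_sample_hit_rate(df_out: pd.DataFrame, levels, atr_now: float) -> float:
    # Fraction of in-sample levels the held-out window revisited, counting a
    # "hit" whenever any out-of-sample close comes within one ATR of the level.
    if not levels:
        return 0.0
    hits = sum(((df_out["Close"] - lvl).abs() <= atr_now).any() for _, lvl, _ in levels)
    return hits / len(levels)

# Example: rerun sections 2.4-2.5 on df.loc[:"2024-06-30"] (hypothetical cutoff),
# then call out_of_sample_hit_rate(df.loc["2024-06-30":], levels, atr_now).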
5. Customization and Extensions
Add more filters, such as moving averages, MACD, or volume spikes, to refine level selection.
Overlay additional indicators directly on the chart for deeper context, and include volume-based features to give further weight to key levels.
Adapt the script to scan multiple tickers, run on intraday data, or trigger alerts when price nears a high-probability band.
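For example, an alert check can be a tiny helper. This sketch assumes the notebook’s levels list, atr_now, and price series, and uses a one-ATR band to match the plotted zones:
def near_level_alerts(last_price, atr_now, levels, band_mult=1.0):
    # Return (price, probability) for every level whose ±band_mult*ATR zone
    # currently contains the last traded price.
    return [(lvl, prob) for _, lvl, prob in levels
            if abs(last_price - lvl) <= band_mult * atr_now]

# Example with the notebook objects:
# for lvl, prob in near_level_alerts(df["Close"].iat[-1], atr_now, levels):
#     print(f"{TICKER}: price near {lvl:.2f} ({prob*100:.0f}% level)")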
Concluding Thoughts
Beyond visual clarity, these logistic-filtered levels can help automate stops, scaling, or signal confirmation.
Because every level adapts to market context, this approach stays strong even as volatility shifts.




