
The Year-End Moves No One’s Watching
Markets don’t wait — and year-end waits even less.
In the final stretch, money rotates, funds window-dress, tax-loss selling meets bottom-fishing, and “Santa Rally” chatter turns into real tape. Most people notice after the move.
Elite Trade Club is your morning shortcut: a curated selection of the setups that still matter this year — the headlines that move stocks, catalysts on deck, and where smart money is positioning before New Year’s. One read. Five minutes. Actionable clarity.
If you want to start 2026 from a stronger spot, finish 2025 prepared. Join 200K+ traders who open our premarket briefing, place their plan, and let the open come to them.
By joining, you’ll receive Elite Trade Club emails and select partner insights. See Privacy Policy.
Elite Quant Plan – 14-Day Free Trial (This Week Only)
No card needed. Cancel anytime. Zero risk.
You get immediate access to:
Full code from every article (including today's LSTM/GRU notebook)
Private GitHub repos & templates
All premium deep dives (3–5 per month)
2 × 1-on-1 calls with me
One custom bot built/fixed for you
Try the entire Elite experience for 14 days — completely free.
→ Start your free trial now 👇
(Doors close in 7 days or when the post goes out of the spotlight — whichever comes first.)
See you on the inside.
👉 Upgrade Now →
🔔 Limited-Time Holiday Deal: 20% Off Our Complete 2026 Playbook! 🔔
Level up before the year ends!
AlgoEdge Insights: 30+ Python-Powered Trading Strategies – The Complete 2026 Playbook
30+ battle-tested algorithmic trading strategies from the AlgoEdge Insights newsletter – fully coded in Python, backtested, and ready to deploy. Your full arsenal for dominating 2026 markets.
Special Promo: Use code DECEMBER2025 for 20% off
Valid only until December 20, 2025 — act fast!
👇 Buy Now & Save 👇
Instant access to every strategy we've shared, plus exclusive extras.
— AlgoEdge Insights Team
Premium Members – Your Full Notebook Is Ready
The complete Google Colab notebook from today's article (with live data, the fully trained LSTM and GRU models, interactive charts, statistics, and one-click CSV export) is waiting for you.
Preview of what you’ll get:

Inside:
Automatic AAPL stock data download (from 2015 → today, December 16, 2025)
Feature engineering: daily returns, 7/21-day moving averages, 21-day volatility, and 14-day RSI
Sequence-to-sequence setup: 30 days of multi-feature history → predict next 7 closing prices
Two classic recurrent models: single-layer LSTM (64 units) and GRU (64 units) with dropout
Full training on 80% of the data (up to 100 epochs; the notebook defaults to 50 for speed) with the Adam optimizer and MSE loss
Comprehensive evaluation: MSE, MAE, RMSE, MAPE (%), and R² metrics in a clean table
Beautiful matplotlib visualizations: training histories + random test samples with past input, actual vs. predicted 7-day prices
Ready-to-use inverse scaling for real-dollar predictions
Bonus: works on any ticker (e.g., Bitcoin "BTC-USD", the S&P 500 "^GSPC") with a one-line change: replace "AAPL" in the download call (see the snippet below)
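For instance, assuming the notebook uses the same yfinance download call shown later in this article, the swap looks like this:
# One-line ticker swap in the download call; everything downstream stays the same
df = yf.download("BTC-USD", start="2015-01-01")  # instead of "AAPL"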
Free readers – you already got the full breakdown and visuals in the article. Paid members – you get the actual tool.
Not upgraded yet? Fix that in 10 seconds here 👇
The Google Colab notebook with the full code is available at the end of the article, behind the paywall 👇 (for paid subs only)

Comparison of LSTM and GRU model predictions against actual Apple stock closing prices over 7 days, following 30 days of historical data as input.
Stock prices move fast, but they rarely move without patterns.
If you could learn those patterns from history and forecast the next seven days, how useful would that be?
This article is a hands-on experiment using real historical data from Apple (AAPL) to test two of the most widely used deep learning models for time series forecasting — LSTM and GRU.
The goal is to see how well they can predict future closing prices based on past trends.
What investment is rudimentary for billionaires but ‘revolutionary’ for 70,571+ investors entering 2026?
Imagine this. You open your phone to an alert. It says, "You spent $236,000,000 more this month than you did last month."
If you were the top bidder at Sotheby’s fall auctions, it could be reality.
Sounds crazy, right? But when the ultra-wealthy spend staggering amounts on blue-chip art, it’s not just for decoration.
The scarcity of these treasured artworks has helped drive their prices, in exceptional cases, to thin-air heights, without moving in lockstep with other asset classes.
The contemporary and post-war segments have even outpaced the S&P 500 overall since 1995.*
Now, over 70,000 people have invested $1.2 billion+ across 500 iconic artworks featuring Banksy, Basquiat, Picasso, and more.
How? You don't need Medici money to invest in multimillion-dollar artworks with Masterworks.
Thousands of members have gotten annualized net returns like 14.6%, 17.6%, and 17.8% from 26 sales to date.
*Based on Masterworks data. Past performance is not indicative of future returns. Important Reg A disclosures: masterworks.com/cd
You’ll see:
How to collect and engineer features from raw stock data
How to structure the problem for sequence prediction
How each model is built, trained, and evaluated
A side-by-side comparison of results using real metrics and charts
Setup
We begin by installing and importing the necessary libraries:
!pip install yfinance scikit-learn matplotlib pandas numpy tensorflow tabulate

import yfinance as yf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from tabulate import tabulate
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, GRU, Dense, Dropout
import random
import os
# Ensure figure directory exists
os.makedirs("figures", exist_ok=True)
plt.style.use("dark_background")
Data Collection and Feature Engineering
We’ll download historical stock data for Apple (AAPL) and engineer features commonly used in financial modeling.
# Download historical stock data
df = yf.download("AAPL", start="2015-01-01")
df.columns = df.columns.get_level_values(0)
df = df[['Close']]
# View the raw data
df.head()
We'll then compute returns, moving averages, rolling standard deviation, and the Relative Strength Index (RSI). This is where you can engineer and include more features for more accurate predictions (a sketch follows the next code block).
# Compute daily return
df["Return"] = df["Close"].pct_change()
# Moving averages and rolling statistics
df["MA7"] = df["Close"].rolling(window=7).mean()
df["MA21"] = df["Close"].rolling(window=21).mean()
df["STD21"] = df["Close"].rolling(window=21).std()
# RSI computation
delta = df["Close"].diff()
gain = np.where(delta > 0, delta, 0)
loss = np.where(delta < 0, -delta, 0)
gain = pd.Series(gain, index=df.index).rolling(window=14).mean()
loss = pd.Series(loss, index=df.index).rolling(window=14).mean()
rs = gain / loss
df["RSI"] = 100 - (100 / (1 + rs))
# Drop missing values
df.dropna(inplace=True)
# View the final dataframe
df.tail()
These features provide the model with more contextual information beyond just the price.
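As one illustration of what an extra feature might look like (a minimal sketch, not part of the original pipeline), here is a MACD column built from two exponential moving averages. To actually use it, you would also append "MACD" to the features list defined in the Sequence Generation section below.
# Hypothetical extra feature: MACD = 12-day EMA of Close minus 26-day EMA of Close
ema12 = df["Close"].ewm(span=12, adjust=False).mean()
ema26 = df["Close"].ewm(span=26, adjust=False).mean()
df["MACD"] = ema12 - ema26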
Sequence Generation
To train a deep learning model on time series, we convert the data into sequences of past values and their future targets.
def create_sequences(data, n_past=30, n_future=7):
    X, y = [], []
    for i in range(n_past, len(data) - n_future):
        X.append(data[i - n_past:i])
        y.append(data[i:i + n_future, 0])  # Predicting closing prices (feature 0)
    return np.array(X), np.array(y)
features = ["Close", "Return", "MA7", "MA21", "STD21", "RSI"]
scaler = MinMaxScaler()
scaled_data = scaler.fit_transform(df[features])
X, y = create_sequences(scaled_data, n_past=30, n_future=7)
split = int(0.8 * len(X))
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]
Each input sequence spans 30 past days, and the model predicts the next 7 days of closing prices.
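A quick sanity check (my addition, not in the original notebook) confirms the shapes match that description:
# X: (num_samples, 30 past days, 6 features); y: (num_samples, 7 future closes)
print("X_train:", X_train.shape, "y_train:", y_train.shape)
print("X_test: ", X_test.shape, " y_test: ", y_test.shape)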
Visualizing Input and Target Sequences
Before training, we inspect a few training examples.
# Rescale the closing price back to original scale
close_index = features.index("Close")

def inverse_transform_close(data):
    # Build a dummy feature array so the scaler can invert just the Close column
    dummy = np.zeros((len(data), len(features)))
    dummy[:, close_index] = data
    return scaler.inverse_transform(dummy)[:, close_index]

# Plot a few examples
num_examples = 3
plt.figure(figsize=(15, 4 * num_examples))
for i in range(num_examples):
    past = inverse_transform_close(X_train[i][:, close_index])
    future = inverse_transform_close(y_train[i])
    plt.subplot(num_examples, 1, i + 1)
    plt.plot(range(len(past)), past, label="Past 30 days", color="blue")
    plt.plot(range(len(past), len(past) + len(future)), future, label="Next 7 days", color="orange")
    plt.axvline(x=len(past) - 1, color="gray", linestyle="--")
    plt.title(f"Training Sample {i+1}")
    plt.xlabel("Days")
    plt.ylabel("Closing Price")
    plt.legend()
plt.tight_layout()
plt.savefig("figures/input_target_samples.png")
plt.show()
Sequence-to-Sequence Training Examples Chart
Model Building and Training
We train two separate models: one with an LSTM layer, and another with a GRU layer.
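One practical note before training: the original code does not fix random seeds, so results will vary slightly between runs. A minimal sketch (my addition) to make weight initialization and shuffling repeatable; full determinism on GPU needs further configuration:
import tensorflow as tf

# Seed Python, NumPy, and TensorFlow RNGs before building the models
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)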
LSTM Model
def build_lstm(input_shape, output_steps):
    model = Sequential([
        LSTM(64, return_sequences=False, input_shape=input_shape),
        Dropout(0.2),
        Dense(output_steps)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

lstm_model = build_lstm(X_train.shape[1:], y_train.shape[1])
lstm_history = lstm_model.fit(
    X_train, y_train,
    validation_split=0.2,
    epochs=100,
    batch_size=32,
    verbose=1
)
GRU Model
def build_gru(input_shape, output_steps):
    model = Sequential([
        GRU(64, return_sequences=False, input_shape=input_shape),
        Dropout(0.2),
        Dense(output_steps)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

gru_model = build_gru(X_train.shape[1:], y_train.shape[1])
gru_history = gru_model.fit(
    X_train, y_train,
    validation_split=0.2,
    epochs=100,
    batch_size=32,
    verbose=1
)
Training Performance
def plot_history(history, model_name):
    plt.figure(figsize=(12, 6))
    plt.plot(history.history['loss'], label="Train Loss", color="blue")
    plt.plot(history.history['val_loss'], label="Val Loss", color="orange")
    plt.title(f"{model_name} Training History")
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    plt.legend()
    plt.savefig(f"figures/{model_name}_training_history.png")
    plt.show()
LSTM Training History
plot_history(lstm_history, "LSTM")
LSTM Training History
GRU Training History
plot_history(gru_history, "GRU")
GRU Training History
These plots show how the training and validation loss evolved over time.
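If the validation loss plateaus or starts rising while the training loss keeps falling, one standard remedy (my addition; the original runs train for the full 100 epochs) is Keras's EarlyStopping callback. A sketch for the LSTM, assuming a freshly built model:
from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss fails to improve for 10 epochs; keep the best weights seen
early_stop = EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)
lstm_history = lstm_model.fit(
    X_train, y_train,
    validation_split=0.2,
    epochs=100,
    batch_size=32,
    callbacks=[early_stop],
    verbose=1
)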
Model Evaluation
We generate predictions and evaluate the models on unseen data.
y_pred_lstm = lstm_model.predict(X_test)
y_pred_gru = gru_model.predict(X_test)
y_pred_lstm_inv = np.array([inverse_transform_close(seq) for seq in y_pred_lstm])
y_pred_gru_inv = np.array([inverse_transform_close(seq) for seq in y_pred_gru])
y_test_inv = np.array([inverse_transform_close(seq) for seq in y_test])
Define evaluation metrics:
def mean_absolute_percentage_error(y_true, y_pred):
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def evaluate_predictions(y_true, y_pred):
    y_true_flat = y_true.flatten()
    y_pred_flat = y_pred.flatten()
    mse = mean_squared_error(y_true_flat, y_pred_flat)
    mae = mean_absolute_error(y_true_flat, y_pred_flat)
    rmse = np.sqrt(mse)
    mape = mean_absolute_percentage_error(y_true_flat, y_pred_flat)
    r2 = r2_score(y_true_flat, y_pred_flat)
    return mse, mae, rmse, mape, r2

metrics_lstm = evaluate_predictions(y_test_inv, y_pred_lstm_inv)
metrics_gru = evaluate_predictions(y_test_inv, y_pred_gru_inv)
headers = ["Model", "MSE", "MAE", "RMSE", "MAPE (%)", "R²"]
results = [
    ["LSTM"] + [round(m, 5) for m in metrics_lstm],
    ["GRU"] + [round(m, 5) for m in metrics_gru]
]
print(tabulate(results, headers=headers, tablefmt="pretty"))
+-------+-----------+----------+----------+----------+---------+
| Model |    MSE    |   MAE    |   RMSE   | MAPE (%) |   R²    |
+-------+-----------+----------+----------+----------+---------+
| LSTM  | 202.51944 | 11.83017 | 14.23093 | 5.64143  | 0.66759 |
| GRU   | 52.45199  |  5.277   | 7.24237  | 2.59935  | 0.91391 |
+-------+-----------+----------+----------+----------+---------+
Visualizing Random Predictions
Let’s visualize how both models perform on randomly selected samples from the test set.
num_examples = 3
plt.figure(figsize=(15, 4 * num_examples))
random_indices = random.sample(range(len(y_test)), num_examples)  # Pick random unique indices
for i, idx in enumerate(random_indices):
    past = inverse_transform_close(X_test[idx][:, close_index])  # Past 30-day input window, in dollars
    true = inverse_transform_close(y_test[idx])
    pred_lstm = inverse_transform_close(y_pred_lstm[idx])
    pred_gru = inverse_transform_close(y_pred_gru[idx])
    plt.subplot(num_examples, 1, i + 1)
    # Plot input sequence (past 30 days)
    plt.plot(range(len(past)), past, label="Past 30 days (Input)", color="white")
    # Plot predictions and ground truth (next 7 days)
    plt.plot(range(len(past), len(past) + len(true)), true, label="Ground Truth", color="white")
    plt.plot(range(len(past), len(past) + len(pred_lstm)), pred_lstm, label="LSTM Prediction", linestyle="--", color="blue")
    plt.plot(range(len(past), len(past) + len(pred_gru)), pred_gru, label="GRU Prediction", linestyle="--", color="orange")
    plt.axvline(x=len(past) - 1, color="gray", linestyle="--")
    plt.title(f"Test Sample {idx}")
    plt.xlabel("Days")
    plt.ylabel("Closing Price")
    plt.legend()
plt.tight_layout()
plt.savefig("figures/predictions_vs_ground_truth.png")
plt.show()
Model Predictions Compared with Ground-Truth Data
These visualizations help us understand how each model tracks actual market movement over a one-week horizon.
The GRU model outperformed the LSTM across all metrics, with lower error rates and stronger predictive accuracy.
Its ability to capture short-term patterns in Apple’s stock price was more consistent and precise.
This shows how small architectural changes in deep learning can lead to meaningful improvements.
You can build on this by testing other stocks, adding new features, or incorporating market signals like news or volume.
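For example, keeping the Volume column that the data-collection step above drops would look like this (a minimal sketch; the rest of the pipeline is unchanged apart from the extended feature list):
# Keep Volume alongside Close (replaces df = df[['Close']] in the data step)
df = yf.download("AAPL", start="2015-01-01")
df.columns = df.columns.get_level_values(0)
df = df[["Close", "Volume"]]

# ...same feature engineering as above, then extend the feature list:
features = ["Close", "Return", "MA7", "MA21", "STD21", "RSI", "Volume"]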
This notebook offers a simple yet powerful starting point for stock price forecasting with deep learning.
Subscribe to our premium content to read the rest.
Become a paying subscriber to get access to this post and other subscriber-only content.