
Elite Quant Plan – 14-Day Free Trial (This Week Only)

No card needed. Cancel anytime. Zero risk.

You get immediate access to:

  • Full code from every article (including today’s HMM notebook)

  • Private GitHub repos & templates

  • All premium deep dives (3–5 per month)

  • 2 × 1-on-1 calls with me

  • One custom bot built/fixed for you

Try the entire Elite experience for 14 days — completely free.

→ Start your free trial now 👇

(Doors close in 7 days or when the post goes out of the spotlight — whichever comes first.)

See you on the inside.

👉 Upgrade Now

🔔 Limited-Time Holiday Deal: 20% Off Our Complete 2026 Playbook! 🔔

Level up before the year ends!

AlgoEdge Insights: 30+ Python-Powered Trading Strategies – The Complete 2026 Playbook

30+ battle-tested algorithmic trading strategies from the AlgoEdge Insights newsletter – fully coded in Python, backtested, and ready to deploy. Your full arsenal for dominating 2026 markets.

Special Promo: Use code DECEMBER2025 for 20% off

Valid only until December 20, 2025 — act fast!

👇 Buy Now & Save 👇

Instant access to every strategy we've shared, plus exclusive extras.

— AlgoEdge Insights Team

Premium Members – Your Full Notebook Is Ready

The complete Google Colab notebook from today’s article (with live data, full Hidden Markov Model, interactive charts, statistics, and one-click CSV export) is waiting for you.

Preview of what you’ll get inside:


  • Beautiful interactive Plotly charts

  • Regime duration & performance tables

  • Ready-to-use CSV export

  • Bonus: works on Bitcoin, SPX, or any ticker with one line change

Free readers – you already got the full breakdown and visuals in the article. Paid members – you get the actual tool.

Not upgraded yet? Fix that in 10 seconds here 👇

The Google Colab notebook with the full code is available at the end of the article, behind the paywall 👇 (for paid subs only)

Can deep learning flag those “Head and Shoulders” or “W-Bottom” candlestick setups for you, in real time?

Let’s explore this. We’ll assess the effectiveness of using a fine-tuned YOLOv8 model to identify candlestick patterns.

YOLO is an object detection algorithm known for its speed and accuracy, and it is widely used in modern computer vision applications.

In this article, we’ll show how to implement a fine-tuned YOLOv8 model specifically designed for recognizing stock price patterns.

This model has been trained on a ~9000 image dataset of candlestick charts with an overall training accuracy of 0.93.

The model is capable of identifying popular patterns like ‘Head and Shoulders’, ‘Triangle’ and ‘W-Bottom.’


What sets the approach presented here apart is that it can analyze both static images and live trading video data. See below.

This article is structured as follows:

  • What is YOLO and its value in technical analysis

  • Python implementation for image and video inference

  • Limitations and concluding thoughts

1. What is YOLO?

YOLO, “You Only Look Once,” is a state-of-the-art object detection algorithm that makes its predictions in a single evaluation of the image, which is what makes it so efficient.

The Core Idea of YOLO

YOLO frames object detection as a single regression problem, straight from image pixels to bounding box coordinates and class probabilities.

This is fundamentally different from older approaches, which required multiple stages to produce the same output.

There are many articles discussing YOLO in depth. Here, we’ll cover only the basics related to the solution we discuss.

How YOLO Works

YOLO divides the input image into an 𝑆×𝑆 grid. Each cell in the grid predicts a fixed number of bounding boxes and confidence scores for those bounding boxes.

The confidence score reflects both how accurate the bounding box is and whether it contains a known object.

For each bounding box, YOLO predicts:

  1. Coordinates (x, y): The center of the bounding box relative to the grid cell.

  2. Width (w) and Height (h): The dimensions of the bounding box, normalized by the image width and height.

  3. Confidence Score: The probability that the box contains an object, combined with how accurate the predicted box is.

  4. Class Probabilities: Conditional class probabilities given that an object is present in the box.

Each grid cell thus predicts 𝐵 bounding boxes and class probabilities for each class.

If the image contains 𝐶 classes, each grid cell’s prediction involves 𝐵 × 5 + 𝐶 values, so the full output tensor has size 𝑆 × 𝑆 × (𝐵 × 5 + 𝐶).

The 5 represents the 𝑥, 𝑦, 𝑤, ℎ coordinates and the confidence score.
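As a quick sanity check on that formula, here is the arithmetic with the classic YOLOv1 settings of 𝑆 = 7, 𝐵 = 2, and 𝐶 = 20 (illustrative values only):

# YOLOv1-style output size: S x S x (B * 5 + C)
S, B, C = 7, 2, 20               # classic YOLOv1 settings (illustrative)
values_per_cell = B * 5 + C      # 2 * 5 + 20 = 30
output_size = S * S * values_per_cell
print(output_size)               # 7 * 7 * 30 = 1470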


Intersection over Union

IoU is an important measure in YOLO that indicates the goodness of fit of the predicted bounding boxes. It is defined as:

IoU = Area of Overlap / Area of Union

computed between a predicted box and a ground-truth box. This value ranges from 0 to 1, where 1 indicates perfect overlap. During training, YOLO maximizes the IoU between the predicted boxes and the ground-truth boxes.
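In code, IoU for two axis-aligned boxes in (x1, y1, x2, y2) corner format can be computed as follows (a minimal sketch; detection libraries compute this internally):

def iou(box_a, box_b):
    """Compute IoU for two boxes given as (x1, y1, x2, y2) corners."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Intersection area (zero if the boxes do not overlap)
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    # Union = sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.143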

Loss Function

YOLO’s loss function consists of three main components:

Localization Loss: Measures errors in the predicted bounding box coordinates. It is computed using the sum of squared errors.

Confidence Loss: Measures the accuracy of the confidence score prediction, using the sum of squared errors for the presence and absence of objects.

Class Probability Loss: Measures errors in the predicted class probabilities.

Putting the three components together, the full loss is:

Loss = λcoord Σᵢ Σⱼ 1ᵢⱼ^obj [(xᵢ - x̂ᵢ)² + (yᵢ - ŷᵢ)² + (√wᵢ - √ŵᵢ)² + (√hᵢ - √ĥᵢ)²]
       + Σᵢ Σⱼ 1ᵢⱼ^obj (Cᵢ - Ĉᵢ)²
       + λnoobj Σᵢ Σⱼ 1ᵢⱼ^noobj (Cᵢ - Ĉᵢ)²
       + Σᵢ 1ᵢ^obj Σ_c (pᵢ(c) - p̂ᵢ(c))²

where the sums run over cells 𝑖, boxes 𝑗, and classes c, and hats denote predicted values.

Where:

  • λcoord and λnoobj are weights to balance the loss terms.

  • 1ᵢⱼ^obj is an indicator function that denotes whether the 𝑗-th bounding box in cell 𝑖 contains an object.
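To make the structure of this loss concrete, here is a minimal NumPy sketch assuming a single box per cell (𝐵 = 1); real YOLO implementations handle multiple boxes per cell and assign the “responsible” box by IoU:

import numpy as np

def yolo_v1_loss(pred, target, lambda_coord=5.0, lambda_noobj=0.5):
    """Simplified YOLOv1-style loss, assuming B = 1 box per cell.

    pred and target have shape (S, S, 5 + C), laid out as
    [x, y, w, h, confidence, class probabilities...].
    Target confidence is 1.0 in cells containing an object, else 0.0.
    """
    obj = target[..., 4]    # 1 where the cell contains an object
    noobj = 1.0 - obj

    # Localization loss (only for cells responsible for an object);
    # square roots on w, h (assumed >= 0) so large boxes are not over-penalized
    xy_err = np.sum(obj[..., None] * (pred[..., 0:2] - target[..., 0:2]) ** 2)
    wh_err = np.sum(obj[..., None] *
                    (np.sqrt(pred[..., 2:4]) - np.sqrt(target[..., 2:4])) ** 2)

    # Confidence loss, weighted differently for object vs. no-object cells
    conf_sq = (pred[..., 4] - target[..., 4]) ** 2
    conf_err = np.sum(obj * conf_sq)
    noobj_err = np.sum(noobj * conf_sq)

    # Class probability loss, only for cells that contain an object
    cls_err = np.sum(obj[..., None] * (pred[..., 5:] - target[..., 5:]) ** 2)

    return (lambda_coord * (xy_err + wh_err)
            + conf_err + lambda_noobj * noobj_err + cls_err)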

Advantages of YOLO

  1. Speed: YOLO operates on imagery in real time and can run at very high frame rates, which makes it suitable for real-time detection applications.

  2. Accuracy: The unified architecture produces fewer background errors than methods that classify region-of-interest (RoI) proposals.

  3. Simplicity: YOLO simplifies the pipeline by framing object detection as a single regression problem.

Evolution from YOLOv1 to YOLOv9

  • YOLOv1: Introduced the concept of single-stage detection.

  • YOLOv2: Improved bounding box prediction and introduced anchor boxes.

  • YOLOv3: Enhanced with deeper networks and feature pyramids for multi-scale detection.

  • YOLOv4: Included various improvements like CSPDarknet backbone and PANet.

  • YOLOv5: Introduced more efficient and flexible implementations.

  • YOLOv6 to YOLOv8: Continued refinements for speed, accuracy, ease of use, handling of small objects, and training processes.

  • YOLOv9: Enhances accuracy and speed further, with transformer-based attention mechanisms and improved feature pyramids.

Fine-Tuning YOLO

Fine-tuning YOLO means adapting the pre-trained model to a specific dataset, for example to recognize custom patterns like candlestick patterns in stock charts.

To fine-tune YOLO for candlestick pattern recognition, follow these steps (a minimal training sketch follows the list):

  1. Prepare the Dataset: Gather and annotate a dataset of candlestick chart images. Ensure the dataset is diverse enough to cover various patterns and market conditions, and avoid look-ahead bias.

  2. Modify the Model: Adjust the model’s architecture and parameters to better suit the new task. This includes setting the number of classes and tweaking hyperparameters.

  3. Train the Model: Use the annotated dataset to train the model. The model’s weights are adjusted to minimize detection loss.

  4. Evaluate and Fine-Tune: Evaluate the model’s performance on a validation set and iterate on the hyperparameters until it meets your requirements.
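For reference, here is what that workflow looks like with the ultralytics API; the dataset config 'candlesticks.yaml' is hypothetical and would point to your annotated train/val image folders and class names:

from ultralytics import YOLO

# Start from a pre-trained YOLOv8 checkpoint
model = YOLO('yolov8n.pt')

# Fine-tune on an annotated candlestick dataset.
# 'candlesticks.yaml' is a hypothetical dataset config listing
# train/val images and class names (e.g. Head and Shoulders, W-Bottom).
model.train(data='candlesticks.yaml', epochs=100, imgsz=640)

# Evaluate on the validation split defined in the dataset config
metrics = model.val()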

2. Python Implementation

2.1 Setting Up the Environment

We will use the ultralytics library, which provides a simple implementation of the YOLO algorithm.

We also need OpenCV for image and video processing, and requests for handling HTTP requests.

# Install PyTorch, an open-source machine learning library
!pip install torch

# Install the specific version of ultralytics library for YOLO model
!pip install ultralytics==8.0.43

# Install the ultralyticsplus library for more utilities
!pip install ultralyticsplus==0.0.28

# Install OpenCV, a library for computer vision tasks
!pip install opencv_python

# Install requests, a library for making HTTP requests
# (quoted so the shell does not treat ">=" as output redirection)
!pip install "requests>=2.23.0"

# Install the headless version of OpenCV for environments without display capabilities
!pip install opencv-python-headless

We then need to import the necessary libraries and functions:

import os

import cv2
import gdown
import requests
from base64 import b64encode
from IPython.display import HTML, display
from ultralyticsplus import YOLO, render_result
from google.colab.patches import cv2_imshow

2.2 Loading the Fine-Tuned YOLO Model

We will load the fine-tuned YOLOv8 model for candlestick pattern recognition.

The fine-tuned model provided by ‘foduucom’ is specifically trained on ~9000 images to recognize different stock market patterns.

The model was validated on ~800 images. Load the model with the code below:

model = YOLO('foduucom/stockmarket-pattern-detection-yolov8')

2.3 Performing Image Inference

Key Parameters:

  • image_path: Path of input image where inference needs to be performed.

  • model_path: Path of the YOLO model that is fine-tuned for specific tasks.

  • image_size: The size to which the image will be resized before being fed into the model.

  • conf_threshold: The confidence threshold for predictions. Detections with a confidence score below this value will be discarded.

  • iou_threshold: The IoU threshold used during NMS, which determines how much overlap is allowed between bounding boxes. Boxes with IoU above this threshold are suppressed to reduce duplicate detections (a minimal NMS sketch appears after the function below).

  • agnostic_nms: Whether class-agnostic NMS is performed. Here it is set to False, meaning NMS is applied separately for each class.

  • max_det: The maximum number of detections allowed per image.

# Function for Image Inference
def yolov8_img_inference(image_path, model_path, image_size=640, conf_threshold=0.25, iou_threshold=0.45):

    # Load the YOLO model from the specified path
    model = YOLO(model_path)

    # Set the model's confidence threshold for predictions
    model.overrides['conf'] = conf_threshold

    # Set the Intersection over Union threshold 
    model.overrides['iou'] = iou_threshold

    # Non-Maximum Suppression will not be class-agnostic
    model.overrides['agnostic_nms'] = False

    # Set the maximum number of detections per image
    model.overrides['max_det'] = 1000

    # Read the input image using OpenCV
    image = cv2.imread(image_path)

    # Perform inference on the image, resized to the requested size
    results = model.predict(image, imgsz=image_size)

    # Render the results on the image
    render = render_result(model=model, image=image, result=results[0])

    return render
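To make the iou_threshold parameter concrete, here is a minimal greedy NMS sketch (purely illustrative; ultralytics performs NMS internally). It reuses the iou helper defined in Section 1:

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy NMS: keep the highest-scoring box, drop heavily overlapping ones."""
    # Sort box indices by score, highest first
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Discard remaining boxes that overlap the kept box too much
        order = [i for i in order
                 if iou(boxes[i], boxes[best]) <= iou_threshold]
    return keep

# Example: two overlapping detections of the same pattern plus one distinct box
boxes = [(0, 0, 2, 2), (0.1, 0.1, 2.1, 2.1), (5, 5, 7, 7)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed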

Download Sample Image

Download a sample image or upload your own:

# Google Drive file ID
file_id = "1W4FdGg43YPCjKoZsQPIpHPZ7B9RpoiQm"

# Construct the direct download URL
image_url = f"https://drive.google.com/uc?id={file_id}"
output_path = "example_image.png"

# Download the image
gdown.download(image_url, output_path, quiet=False)


# Read and display the image
image = cv2.imread(output_path)
if image is None:
    print("Failed to load the image. Check the file format and path.")
else:
    cv2_imshow(image)

Inference

Run the inference on the image specified:

# Example usage for image inference
image_path = 'example_image.png'  # The file downloaded above; replace with your own path
model_path = 'foduucom/stockmarket-pattern-detection-yolov8'
rendered_image = yolov8_img_inference(image_path, model_path)
rendered_image

2.4 Performing Video Inference

Download Video Example

We download a sample video hosted on Google Drive for inference. You can also use your own.

# Google Drive file ID
file_id = "1LgaKNgXhv1rdqH08j9vxeDbcMb2YDCZ8"

# Construct the direct download URL
video_url = f"https://drive.google.com/uc?id={file_id}"
output_path = "video_input.mp4"

# Download the video
gdown.download(video_url, output_path, quiet=False)

Function For Video

The function show_preds_video processes video frames with the YOLO model and annotates each frame where patterns are detected.

It also allows for frame skipping to process every nth frame and improve efficiency.

  1. Frame-by-Frame Analysis: The function reads and processes each frame of the video individually.

  2. Frame Skipping: The frame_skip parameter allows you to process every nth frame. For instance, if frame_skip is set to 5, the function will process every fifth frame.

  3. Real-Time Detection: The function performs inference on each selected frame using the YOLO model, annotating the frame with detected patterns.

def show_preds_video(video_path, model_path, output_video_path='output_video.mp4', frame_skip=5, image_size=640, conf_threshold=0.25, iou_threshold=0.45):
    """
    Process video frames with the YOLO model and annotate the frames.
    """
    # Load the YOLO model once, outside the frame loop
    model = YOLO(model_path)
    model.overrides['conf'] = conf_threshold  # Confidence threshold for predictions
    model.overrides['iou'] = iou_threshold  # IoU threshold for NMS
    model.overrides['agnostic_nms'] = False  # Per-class (not class-agnostic) NMS
    model.overrides['max_det'] = 1000  # Maximum number of detections per frame

    cap = cv2.VideoCapture(video_path)  # Open the video file
    fps = cap.get(cv2.CAP_PROP_FPS) or 20.0  # Source frame rate, falling back to 20 fps
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')  # Define the codec for the output video
    out = cv2.VideoWriter(output_video_path, fourcc, fps, (width, height))  # Writer for the annotated video

    frame_count = 0  # Initialize frame count

    while cap.isOpened():  # Loop through frames
        success, frame = cap.read()  # Read a frame from the video
        if not success:
            break  # End of video

        if frame_count % frame_skip == 0:  # Process every nth frame, as specified by frame_skip
            results = model.predict(frame, imgsz=image_size)  # Perform inference on the frame
            annotated_frame = results[0].plot()  # Annotate the frame with detection results
            out.write(annotated_frame)  # Write the annotated frame to the output video
        else:
            out.write(frame)  # Write the original frame unprocessed

        frame_count += 1  # Increment the frame count

    cap.release()  # Release the video capture object
    out.release()  # Release the video writer object
    cv2.destroyAllWindows()  # Close any OpenCV windows

    # Return the path to the saved annotated video
    return output_video_path

def display_video(video_path):
    """
    Display the video in the notebook. The file is embedded as base64,
    since a plain local file path will not render in Colab's output frame.
    """
    with open(video_path, 'rb') as f:
        video_b64 = b64encode(f.read()).decode()
    display(HTML(f"""
    <video width="600" controls>
        <source src="data:video/mp4;base64,{video_b64}" type="video/mp4">
    </video>
    """))

# Execution
video_path = '/content/video_input.mp4'  # Replace with video path
model_path = 'foduucom/stockmarket-pattern-detection-yolov8'
output_video_path = show_preds_video(video_path, model_path)
display_video(output_video_path)

3. Limitations

There are natural limitations to be aware of.

  • Data Diversity: For this implementation we do not have access to the training data; however, motivated technical traders are encouraged to annotate their own dataset and train their own YOLO model without look-ahead bias.

  • Biases in Model Predictions: YOLO predictions reflect whatever biases exist in the data the model was trained on. These can be mitigated by regularly retraining on updated data.

  • Error Handling: No model escapes false positives and false negatives, so its predictions should be validated with other forms of analysis.

Concluding Thoughts

Using YOLO to recognize candlestick patterns is an innovative way to enhance technical analysis. It could allow traders to make decisions faster and in real time.

However, to see YOLO’s real potential, one would have to fine-tune a full YOLO model on more candlestick patterns and on data that minimizes bias.


Subscribe to our premium content to read the rest.

Become a paying subscriber to get access to this post and other subscriber-only content.

Upgrade
