This project implements a computer vision system that recognizes American Sign Language (ASL) alphabet gestures in real time. Using deep learning and computer vision techniques, it identifies and classifies hand gestures corresponding to the 26 letters of the alphabet.

Key Features:

• Real-time hand detection and tracking using YOLO
• Convolutional Neural Network (CNN) for gesture classification
• High-accuracy recognition with 95%+ precision
• Support for webcam and video file input
• Interactive training mode for model improvement
• Data augmentation pipeline for robust training
• Cross-platform compatibility

The project aims to bridge communication gaps and provide an accessible tool for learning and practicing ASL alphabet signs.
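The core idea above is a two-stage pipeline: a detector finds the hand in each frame, and a classifier maps the cropped hand region to one of the 26 letters. The sketch below illustrates that flow with NumPy only; the function names (`detect_hand`, `classify_letter`, `recognize`) and the fixed-crop detector and linear-softmax classifier are stand-ins, not the project's actual YOLO and CNN models.

```python
import string

import numpy as np

ASL_LETTERS = list(string.ascii_uppercase)  # 26 class labels, A-Z


def detect_hand(frame: np.ndarray) -> tuple:
    """Stand-in for the YOLO hand detector: returns a box (x, y, w, h).

    The real system would run a trained YOLO model here; this stub
    returns a fixed central crop so the pipeline runs end to end.
    """
    h, w = frame.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)


def classify_letter(crop: np.ndarray, weights: np.ndarray) -> str:
    """Stand-in for the CNN classifier: flattens the crop, applies one
    linear layer plus softmax, and returns the most likely letter."""
    x = crop.astype(np.float32).ravel() / 255.0
    logits = weights @ x                     # (26,) scores, one per letter
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    probs /= probs.sum()
    return ASL_LETTERS[int(np.argmax(probs))]


def recognize(frame: np.ndarray, weights: np.ndarray) -> str:
    """Run the detect-then-classify pipeline on a single video frame."""
    x, y, w, h = detect_hand(frame)
    crop = frame[y:y + h, x:x + w]
    # The real system would resize the crop to the CNN's input size here;
    # this sketch assumes a fixed frame size so dimensions already match.
    return classify_letter(crop, weights)
```

For webcam or video input, the real system would feed each captured frame (e.g. from OpenCV's `cv2.VideoCapture`) through `recognize` and overlay the predicted letter on the display.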
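The feature list also mentions a data augmentation pipeline for robust training. A minimal sketch of one such pipeline, in NumPy only (the function name `augment` and the specific jitter ranges are illustrative assumptions, not taken from the project):

```python
import numpy as np


def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply one pass of simple augmentation to a hand-gesture crop.

    Uses a random brightness shift and a small random translation.
    Horizontal flips are deliberately omitted: mirroring a sign can
    change its meaning, so flips are generally unsafe for ASL data.
    """
    out = image.astype(np.float32)

    # Random brightness jitter of up to +/-25 intensity levels.
    out = np.clip(out + rng.uniform(-25, 25), 0, 255)

    # Small random translation (up to 3 px) via np.roll, zeroing the
    # rows/columns that wrapped around so no pixels leak across edges.
    dy, dx = rng.integers(-3, 4, size=2)
    out = np.roll(out, (dy, dx), axis=(0, 1))
    if dy > 0:
        out[:dy] = 0
    elif dy < 0:
        out[dy:] = 0
    if dx > 0:
        out[:, :dx] = 0
    elif dx < 0:
        out[:, dx:] = 0

    return out.astype(np.uint8)
```

Applying `augment` several times per training image yields shifted, brightness-varied copies that help the classifier generalize across lighting and hand position.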