Architecture

The alpha 30/7 architecture integrates several techniques to optimize data processing and predictive performance. At its core, a Variational Autoencoder (VAE) compresses the high-dimensional input (93 features) into a compact latent space, reducing noise while preserving the essential structure of the data. The encoder maps each input to a mean and log-variance, from which a latent vector is sampled via the reparameterization trick; the decoder then reconstructs the input from that latent vector, yielding a robust representation for subsequent analysis.
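The encode, sample, decode path described above can be sketched in plain numpy. This is a minimal illustration, not the model's implementation: the hidden size (32), latent size (8), and random weights are assumptions standing in for trained parameters; only the 93-feature input width comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# 93 input features per the text; hidden and latent sizes are assumptions.
INPUT_DIM, HIDDEN, LATENT = 93, 32, 8

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.normal(0, 0.05, (INPUT_DIM, HIDDEN))
W_mu = rng.normal(0, 0.05, (HIDDEN, LATENT))
W_logvar = rng.normal(0, 0.05, (HIDDEN, LATENT))
W_dec = rng.normal(0, 0.05, (LATENT, INPUT_DIM))

def encode(x):
    # Map the input to the latent distribution's mean and log-variance.
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    # Sample z = mu + sigma * eps, so gradients can flow through mu/logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    # Linear reconstruction back to the original 93 features.
    return z @ W_dec

x = rng.standard_normal((4, INPUT_DIM))   # a batch of 4 samples
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
x_hat = decode(z)
print(z.shape, x_hat.shape)  # (4, 8) (4, 93)
```

The compact `z` (8 values per sample here) is what the downstream sequence model would consume in place of the raw 93 features.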

Building on the VAE output, the architecture employs a Bidirectional Long Short-Term Memory (LSTM) network with 64 units to capture both past and future dependencies in sequential data, which is crucial for time series and other ordered inputs. To prevent overfitting, L2 regularization is applied, balancing model complexity against generalization. An attention mechanism then weights the LSTM outputs so the model focuses on the most predictive features derived from the VAE. Finally, Dense layers with ReLU activations combine these features, and a sigmoid output layer produces a calibrated probability for binary classification.
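The attention-plus-dense head can be sketched in numpy as follows. Everything beyond what the text states is an assumption: the sequence length of 10, the 128-dimensional bidirectional outputs (reading "64 units" as 64 per direction), the additive scoring function, the 16-unit dense width, and the random weights are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

TIMESTEPS, UNITS = 10, 64  # 64 LSTM units per direction is an assumption

# Stand-in for bidirectional LSTM outputs: one 128-dim vector per timestep
# (64 forward + 64 backward), here filled with random values.
h = rng.standard_normal((TIMESTEPS, 2 * UNITS))

# Additive attention: score each timestep, softmax the scores, then take
# the weighted sum of timestep vectors as a single context vector.
w = rng.normal(0, 0.05, (2 * UNITS,))
scores = np.tanh(h) @ w
weights = np.exp(scores - scores.max())
weights /= weights.sum()          # attention weights sum to 1
context = weights @ h             # shape (128,)

# Dense head: ReLU hidden layer, then a sigmoid for binary classification.
W1 = rng.normal(0, 0.05, (2 * UNITS, 16))
W2 = rng.normal(0, 0.05, (16, 1))
hidden = np.maximum(0, context @ W1)       # ReLU
prob = 1 / (1 + np.exp(-(hidden @ W2)))    # sigmoid probability in (0, 1)
print(float(prob[0]))
```

The softmax weights make the focusing behavior explicit: timesteps with higher scores contribute more to `context`, and the dense layers map that context to a single predictive probability.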