Dissertation Defense Announcement for Abdelmoula El Yazizi – 10/03/2025 at 9 AM

September 11, 2025

Dissertation Title: Using Quantum Annealing for Sampling and Pattern Generation in Generative Machine Learning and Catastrophic Forgetting Mitigation

When: Friday, October 3, 2025, at 9:00 AM (CDT)

Where: Simrall 228 or Online [Link]

Candidate: Abdelmoula El Yazizi

Degree: Doctor of Philosophy in Electrical & Computer Engineering

Committee Members: Dr. Yaroslav Koshka, Dr. Samee U. Khan, Dr. Mark A. Novotny, Dr. John Ball

Abstract:

The first goal of this dissertation was to understand why previous investigations failed to find significant and consistent improvements in the trainability of Restricted Boltzmann Machines (RBMs) when a Quantum Annealer (QA) was used for sampling from the RBM probability distribution. The second goal was to address the shortcomings of those previous investigations, explore possibilities for improving RBM training, and identify other machine learning applications that could benefit from QA-based sampling or pattern generation.

The first part of this dissertation focused on a Local-Valley (LV) centered approach to assessing sampling quality. QA-based and Gibbs samples were compared based on the LVs to which they belonged and the energies of the corresponding local minima. Many of the LVs found by the two techniques differed. For the higher-probability sampled states, however, the two techniques were (unfavorably) less complementary and more overlapping. The limited complementarity of QA-based sampling explains why many previous investigations failed to achieve substantial (or even any) improvements. However, the results also revealed some potential for improvement, e.g., by combining the QA-based and classical sampling techniques to include samples that either method alone would miss.
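To make the LV notion concrete, the sketch below assigns a sampled spin configuration to a local valley by greedy single-spin-flip descent on an Ising energy; two samples belong to the same LV if descent from both reaches the same local minimum. The energy form, variable names, and descent rule are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def ising_energy(s, J, h):
    """Energy of a spin configuration s in {-1, +1}^n (J symmetric, zero diagonal)."""
    return -0.5 * s @ J @ s - h @ s

def local_valley(s, J, h):
    """Greedy single-spin-flip descent: repeatedly flip the spin that lowers
    the energy most, until no flip helps. The resulting local minimum can
    serve as a label for the sample's local valley (LV)."""
    s = s.copy()
    while True:
        dE = 2 * s * (J @ s + h)   # energy change from flipping each spin
        i = int(np.argmin(dE))
        if dE[i] >= 0:             # no improving flip: at a local minimum
            return s
        s[i] = -s[i]

rng = np.random.default_rng(0)
n = 8
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h = rng.normal(size=n)
sample = rng.choice([-1, 1], size=n)
minimum = local_valley(sample, J, h)
```

Counting the distinct minima reached from QA samples versus Gibbs samples then quantifies how complementary the two sets of LVs are.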

In the second part of the dissertation, a novel hybrid sampling method was developed, combining the classical and QA contributions. LVs found from QA solutions were combined with a subset of the training patterns to initialize the Markov chains during RBM learning. No improvement in RBM training was achieved in this part of the work, supporting the hypothesis that the differences between QA-based and MCMC sampling are insufficient to benefit the training.
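A minimal sketch of the hybrid idea, under a standard binary RBM parameterization (weights W, visible bias b, hidden bias c): the Gibbs chains are seeded from a pool mixing QA-derived LV states with a subset of training patterns, rather than from training data alone. The function names and the uniform mixing rule are assumptions for illustration, not the method as implemented in the dissertation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, rng):
    """One block-Gibbs step of a binary RBM: sample hidden units given the
    visible layer, then resample the visible layer given the hidden units."""
    h = (rng.random((v.shape[0], W.shape[1])) < sigmoid(v @ W + c)).astype(float)
    v = (rng.random((h.shape[0], W.shape[0])) < sigmoid(h @ W.T + b)).astype(float)
    return v

def hybrid_chain_init(qa_lv_states, train_patterns, n_chains, rng):
    """Seed the Markov chains from a pool mixing QA local-valley states with
    training patterns (uniform resampling is an illustrative choice)."""
    pool = np.vstack([qa_lv_states, train_patterns])
    idx = rng.choice(len(pool), size=n_chains, replace=True)
    return pool[idx].astype(float)

rng = np.random.default_rng(1)
n_vis, n_hid = 6, 4
W = 0.1 * rng.normal(size=(n_vis, n_hid))
b = np.zeros(n_vis); c = np.zeros(n_hid)
qa_states = rng.integers(0, 2, size=(5, n_vis))    # stand-in for QA LV states
train = rng.integers(0, 2, size=(10, n_vis))
chains = hybrid_chain_init(qa_states, train, n_chains=8, rng=rng)
chains = gibbs_step(chains, W, b, c, rng)
```

In learning terms, these seeded chains would replace the usual data-initialized (CD) or persistent (PCD) chains when estimating the negative phase of the gradient.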

In the third part of the dissertation, the feasibility of using QA-generated patterns for generative replay-based mitigation of Catastrophic Forgetting (CF) during incremental learning was demonstrated for the first time. Both the speed of generating a large number of distinct patterns, including those from the lower-probability parts of the distribution, and the potential for further improvement make this approach promising for a range of challenging machine learning applications.
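The generative-replay mechanism the abstract refers to can be sketched as follows: while training on a new task, each mini-batch mixes new-task data with generated patterns standing in for earlier tasks. The generator below is a placeholder for QA-based pattern generation, and the function name and mixing fraction are assumptions for illustration.

```python
import numpy as np

def replay_batch(new_data, generated_old, replay_frac, rng):
    """Build a replay mini-batch: new-task samples plus a fraction of
    generated patterns representing previously learned tasks, so the model
    keeps rehearsing old knowledge while learning the new task."""
    n_replay = int(round(replay_frac * len(new_data)))
    idx = rng.choice(len(generated_old), size=n_replay, replace=True)
    batch = np.vstack([new_data, generated_old[idx]])
    rng.shuffle(batch)          # in-place shuffle along the first axis
    return batch

rng = np.random.default_rng(2)
new_task = rng.integers(0, 2, size=(16, 6))
generated = rng.integers(0, 2, size=(40, 6))   # stand-in for QA-generated patterns
batch = replay_batch(new_task, generated, replay_frac=0.5, rng=rng)
```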