
Developed innovative optimization and ML algorithms to tackle data science tasks, including classification and sparse recovery, focusing on the NP-hard Maximum Feasible Subsystem problem.


PhD Research

My doctoral research focused on developing novel optimization algorithms for a range of data science tasks, including data classification, dimensionality reduction, and sparse recovery. The central challenge was the NP-hard Maximum Feasible Subsystem (MAX FS) problem: given an infeasible system of linear constraints, find a feasible subsystem of maximum cardinality. I concentrated on instances with dense constraint matrices, and through algorithmic extensions to state-of-the-art MAX FS methods achieved significant improvements in solution speed while maintaining or enhancing solution quality.

These advances apply across several AI domains. In binary classification tasks, the developed algorithm consistently outperformed the traditional methods it was compared against, delivering higher accuracy with minimal computational overhead. In sparse recovery tasks within compressive sensing, the developed methods substantially reduced processing time without compromising the achievable sparsity level, enabling accurate signal recovery from highly compressed data.
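For orientation, the MAX FS problem is commonly stated as a mixed-integer program. The big-M formulation below is the standard textbook version, not one taken from the publications listed here:

```latex
% Given a (possibly infeasible) system $a_i^\top x \le b_i$, $i = 1,\dots,m$,
% introduce a binary indicator $y_i$ per constraint ($y_i = 1$ keeps constraint $i$):
\begin{align*}
\max_{x,\, y} \quad & \sum_{i=1}^{m} y_i \\
\text{s.t.} \quad   & a_i^\top x \le b_i + M\,(1 - y_i), && i = 1,\dots,m, \\
                    & y_i \in \{0, 1\},                  && i = 1,\dots,m,
\end{align*}
% where $M$ is a sufficiently large constant: constraints with $y_i = 0$ are
% switched off, and the objective maximizes the number of constraints kept.
```

Because this integer program is NP-hard, practical MAX FS methods (including those extended in this research) rely on LP-based heuristics rather than solving the formulation exactly.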

Publications

Optimization

Faster Maximum Feasible Subsystem solutions for dense constraint matrices

Machine Learning

Biological Data Classification via Faster MAXimum Feasible Subsystem Algorithm

  • In this work, I developed a novel algorithm tailored to binary data classification. It offers a new approach to separating classes, such as distinguishing ill patients from a healthy population in medical diagnosis. Rigorous testing on datasets from the UCI Machine Learning Repository, with comparisons against four widely used classifiers (K-Nearest Neighbors, Support Vector Machines, Naive Bayes, and Logistic Regression) under 10-fold cross-validation, consistently shows the algorithm's superior accuracy and promising performance on recall-oriented tasks.
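The evaluation protocol described above can be sketched with scikit-learn. This reproduces only the four baseline classifiers under 10-fold cross-validation; the MAX FS classifier itself is not public, and a synthetic dataset stands in for the UCI benchmarks:

```python
# Sketch of the baseline comparison: 10-fold CV over the four reference
# classifiers named above. Synthetic data replaces the UCI datasets.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Stand-in binary classification dataset.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

models = {
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Support Vector Machine": SVC(),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    # Standardize features inside each fold to avoid leakage.
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```

Swapping `scoring="accuracy"` for `scoring="recall"` reproduces the recall-oriented comparison mentioned above.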

Sparse Recovery

Recovery of Noisy Compressively Sensed Speech via Regularized Maximum Feasible Subsystem Algorithm

  • In this paper, I developed an optimization algorithm for the recovery of noisy compressively sensed speech. The algorithm operates by iteratively identifying a support set of variables crucial for reconstructing the speech signal from compressed measurements. It begins by solving a Linear Programming (LP) problem to select candidate variables with the largest magnitude. Through an iterative process of updating objective function coefficients and selecting the winning variable, the algorithm efficiently identifies a small set of variables forming the support for the system of equations. To enhance robustness in the presence of noise, I further proposed a regularized version of the developed method, which incorporates an absolute error tolerance into the LP constraints. This regularization ensures greater resilience to noise during the recovery process, improving the algorithm's performance in real-world scenarios.
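The iterative loop described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the published implementation: it solves an equality-constrained LP (with the standard split x = u − v to linearize magnitudes), moves the largest-magnitude "winning" variable into the support by zeroing its objective weight, and stops once a least-squares refit on the support reproduces the measurements:

```python
# Hedged sketch of LP-based greedy support identification for recovering a
# sparse x from compressed measurements b = A @ x (A has more columns than rows).
import numpy as np
from scipy.optimize import linprog

def recover_support(A, b, max_support=None, tol=1e-6):
    """Illustrative greedy LP support identification (not the published code)."""
    m, n = A.shape
    max_support = max_support or m
    # Split x = u - v with u, v >= 0 so each |x_j| is linear in the LP variables.
    A_eq = np.hstack([A, -A])
    w = np.ones(2 * n)                 # objective weights on (u, v)
    support = []
    x_hat = np.zeros(n)
    for _ in range(max_support):
        res = linprog(w, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
        x = res.x[:n] - res.x[n:]
        # "Winning" variable: largest magnitude not yet in the support.
        j = max((k for k in range(n) if k not in support), key=lambda k: abs(x[k]))
        support.append(j)
        w[j] = w[n + j] = 0.0          # stop penalizing the chosen variable
        # Least-squares refit restricted to the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x_hat = np.zeros(n)
        x_hat[support] = coef
        if np.linalg.norm(A @ x_hat - b) <= tol:
            break                      # support explains the measurements
    return x_hat, support
```

The regularized variant described above would replace the equality constraints with a tolerance band, i.e. swap `A_eq x = b` for the pair of inequalities `A x <= b + eps` and `-A x <= -b + eps`, which absorbs measurement noise during recovery.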

MAXimum Feasible Subsystem Recovery of Compressed ECG Signals

Multi-Stage Detection of Atrial Fibrillation in Compressively Sensed Electrocardiogram

Improved Recovery of Compressive Sensed Speech

Maximum feasible subsystem algorithms for recovery of compressively sensed speech
