PyPop7: A Pure-Python Library for POPulation-based Black-Box Optimization (BBO), especially their *Large-Scale* versions/variants. https://pypop.rtfd.io/
Official implementation for the paper "Model-based Diffusion for Trajectory Optimization". Model-based diffusion (MBD) is a novel diffusion-based trajectory optimization framework that employs a dynamics model to run the reverse denoising process to generate high-quality trajectories.
Zeroth-order optimizers, gradient chaining, and random gradient approximation
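The random gradient approximation mentioned above can be sketched as a two-point finite-difference estimator along random directions; the function and parameter names below are illustrative, not taken from any of the listed repositories:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_samples=20, rng=None):
    """Two-point zeroth-order gradient estimate: average
    (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random Gaussian
    directions u. Only function evaluations of f are needed."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_samples

# Usage: minimize the sphere function with plain gradient descent,
# using only black-box evaluations of f.
f = lambda x: float(np.sum(x ** 2))
x = np.ones(5)
for _ in range(200):
    x -= 0.1 * zo_gradient(f, x)
```

Because E[u uᵀ] = I for standard Gaussian directions, the estimator is an approximately unbiased smoothed gradient; averaging over more directions trades extra function evaluations for lower variance.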
Hard-Thresholding Meets Evolution Strategies in Reinforcement Learning
Blockwise Direct Search
Powell's Derivative-Free Optimization solvers.
Benchmarking optimization solvers.
[ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark".
Official implementation for the paper "CoVO-MPC: Theoretical Analysis of Sampling-based MPC and Optimal Covariance Design" accepted by L4DC 2024. CoVO-MPC is an optimal sampling-based MPC algorithm.
[ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, Sijia Liu
Elo ratings for global black box derivative-free optimizers
[NeurIPS 2023] “SODA: Robust Training of Test-Time Data Adaptors”
PRIMA: Reference Implementation for Powell's methods with Modernization and Amelioration
This repository contains the PyTorch implementation of Zeroth Order Optimization Based Adversarial Black Box Attack (https://arxiv.org/abs/1708.03999)
Robustify Black-Box Models (ICLR'22 - Spotlight)
Code for IEEE MLSP 2021 paper titled "Model-Free Learning of Optimal Deterministic Resource Allocations in Wireless Systems via Action-Space Exploration"
Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
Implementation of the algorithms described in the papers "ZO-AdaMM: Zeroth Order Adaptive Momentum" by Chen et al., "Stochastic first- and zeroth-order methods" by Ghadimi et al., and "SignSGD via zeroth-order oracle" by Liu et al.
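The sign-based variant in that entry can be sketched in a few lines: estimate a gradient from two-point zeroth-order queries, then step along its coordinate-wise sign. This is a minimal illustration of the idea, not the listed repository's implementation; all names are hypothetical:

```python
import numpy as np

def zo_sign_step(f, x, mu=1e-3, n_samples=20, lr=0.05, rng=None):
    """One ZO sign-SGD-style step: build a two-point random-direction
    gradient estimate, then update with its sign (fixed step size)."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return x - lr * np.sign(g / n_samples)

# Usage: drive the sphere function toward zero using only the sign
# of the estimated gradient.
f = lambda x: float(np.sum(x ** 2))
x = np.ones(3)
for _ in range(100):
    x = zo_sign_step(f, x)
```

Using only the sign makes each update robust to the scale (though not the direction) of gradient-estimation noise; the fixed step size means iterates oscillate in a neighborhood of the optimum rather than converging exactly.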
SCOBO: Sparsity-aware Comparison Oracle Based Optimization
Nevergrad Optimizer Benchmarking for 3D Performance Capture