
Problem and algorithms ignore the torch.set_default_dtype #51

Open
miguelgondu opened this issue Jan 5, 2023 · 2 comments

miguelgondu commented Jan 5, 2023

I just ran into a problem when trying to run problems in double precision. I assumed that calling torch.set_default_dtype(torch.float64) would be enough for EvoTorch to create all of its internal tensors in double precision, but this is not the case.

Consider the following simple example of running CMA-ES for a single step:

import torch

from evotorch import Problem
from evotorch.algorithms import CMAES

torch.set_default_dtype(torch.float64)

# A simple objective function
def objective_function(xy: torch.Tensor) -> torch.Tensor:
    x = xy[..., 0]
    y = xy[..., 1]
    return x + y


# Defining the problem
problem = Problem(
    "max",
    objective_function,
    bounds=[0.0, 1.0],
    solution_length=2,
    vectorized=True,
)

# Defining the searcher
cmaes = CMAES(
    problem,
    popsize=100,
    stdev_init=1.0,
    center_learning_rate=0.1,
    cov_learning_rate=0.1,
)

# Taking a single step
cmaes.step()

# Accessing the current best's dtype
# (expected float64, but it is actually float32)
print(cmaes.get_status_value("pop_best").values.dtype)
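For reference, the global default really is changed by the call above; it just is not consulted when Problem picks its dtype. A quick torch-only check (no EvoTorch involved):

```python
import torch

# torch.set_default_dtype changes both the reported default dtype and
# the dtype of newly created floating-point tensors.
torch.set_default_dtype(torch.float64)
print(torch.get_default_dtype())   # torch.float64
print(torch.tensor([0.5]).dtype)   # torch.float64
```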

If we want float64, we have to specify it explicitly when constructing the Problem. Indeed, running the same script with

problem = Problem(
    "max",
    objective_function,
    bounds=[0.0, 1.0],
    solution_length=2,
    vectorized=True,
    dtype=torch.float64,
)

gives us a best candidate in double precision. Why do we have to specify the dtype twice? Shouldn't Problem inherit PyTorch's default float dtype?

miguelgondu commented Jan 5, 2023

This could be fixed by changing how the default dtype is chosen when the problem is defined: see line 908 of core.py.

Replacing

# Set the dtype for the decision variables of the Problem
if dtype is None:
    self._dtype = torch.float32
elif is_dtype_object(dtype):
    self._dtype = object
else:
    self._dtype = to_torch_dtype(dtype)

with

# Set the dtype for the decision variables of the Problem
if dtype is None:
    self._dtype = torch.get_default_dtype()
elif is_dtype_object(dtype):
    self._dtype = object
else:
    self._dtype = to_torch_dtype(dtype)

could do it, but I'm not sure. We would also need to change the default of self._eval_dtype accordingly.
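As a minimal, library-free sketch of the precedence this change would give us (an explicit dtype always wins; None falls back to the global default instead of a hard-coded float32) — resolve_dtype and the string dtypes below are hypothetical stand-ins, not EvoTorch API:

```python
# Hypothetical stand-in for the proposed dtype resolution.
def resolve_dtype(dtype, get_default_dtype):
    if dtype is None:
        # Inherit the global default (mimics torch.get_default_dtype()).
        return get_default_dtype()
    # An explicitly requested dtype always takes precedence.
    return dtype

# With the global default set to "float64" (mimicking
# torch.set_default_dtype(torch.float64)):
default = lambda: "float64"
print(resolve_dtype(None, default))       # float64
print(resolve_dtype("float32", default))  # float32
```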

engintoklu (Collaborator) commented:

Hi @miguelgondu,

Thanks for trying out EvoTorch, for the issue report, and for the suggestions!

I should be able to identify the places (like the line you pointed out) where EvoTorch assumes float32 types, and then make those places use the current default dtype of PyTorch, in a consistent manner.

Until the fix arrives, I am hoping that instantiating your Problem instances like this will work for you:

problem = Problem(
    ...,
    dtype=torch.get_default_dtype(),
    ...,
)

Thanks!

@Higgcz Higgcz added this to the 0.5.0 milestone Jan 17, 2023