
AttributeError: Can't pickle local object 'create_lossOdeint.<locals>.lossOdeint' #40

gasilva opened this issue May 19, 2020 · 0 comments


gasilva commented May 19, 2020

I got the following error when trying to use PDE:

Traceback (most recent call last):
  File "/home/ats4i/Desktop/corona/dataAndModelsCovid19/dataFit_SEAIRD_v2YaboxEvolutionaryParams.py", line 1036, in <module>
    main(countriesExt)
  File "/home/ats4i/Desktop/corona/dataAndModelsCovid19/dataFit_SEAIRD_v2YaboxEvolutionaryParams.py", line 660, in main
    results = ray.get(results)
  File "/home/ats4i/anaconda3/lib/python3.7/site-packages/ray/worker.py", line 2349, in get
    raise value
ray.exceptions.RayTaskError: ray_Learner:train() (pid=117047, host=jedha)
  File "/home/ats4i/Desktop/corona/dataAndModelsCovid19/dataFit_SEAIRD_v2YaboxEvolutionaryParams.py", line 337, in train
    for step in de.geniterator():
  File "/home/ats4i/anaconda3/lib/python3.7/site-packages/yabox/algorithms/de.py", line 172, in geniterator
    for step in it:
  File "/home/ats4i/anaconda3/lib/python3.7/site-packages/yabox/algorithms/de.py", line 203, in iterator
    it = PDEIterator(self)
  File "/home/ats4i/anaconda3/lib/python3.7/site-packages/yabox/algorithms/de.py", line 70, in __init__
    super().__init__(de)
  File "/home/ats4i/anaconda3/lib/python3.7/site-packages/yabox/algorithms/de.py", line 9, in __init__
    self.fitness = de.evaluate(self.population)
  File "/home/ats4i/anaconda3/lib/python3.7/site-packages/yabox/algorithms/de.py", line 161, in evaluate
    return self.evaluate_denormalized(PD)
  File "/home/ats4i/anaconda3/lib/python3.7/site-packages/yabox/algorithms/de.py", line 213, in evaluate_denormalized
    return list(self.pool.map(self.fobj, PD, chunksize=self.chunksize))
  File "/home/ats4i/anaconda3/lib/python3.7/multiprocessing/pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/ats4i/anaconda3/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
  File "/home/ats4i/anaconda3/lib/python3.7/multiprocessing/pool.py", line 431, in _handle_tasks
    put(task)
  File "/home/ats4i/anaconda3/lib/python3.7/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/ats4i/anaconda3/lib/python3.7/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
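The traceback ends inside `_ForkingPickler.dumps`: yabox's `PDE` sends the objective to a `multiprocessing` pool (`self.pool.map(self.fobj, ...)`), and the default pickler cannot serialize a function defined inside another function. A minimal sketch of the same failure, with a hypothetical `make_loss` standing in for `create_lossOdeint`:

```python
import pickle

def make_loss(data):
    # Nested function: it only exists inside this call frame, so it has
    # no importable module-level name and pickle cannot serialize it.
    def loss(point):
        return sum((p - d) ** 2 for p, d in zip(point, data))
    return loss

f = make_loss([1.0, 2.0])
try:
    pickle.dumps(f)
except Exception as exc:
    # e.g. AttributeError: Can't pickle local object 'make_loss.<locals>.loss'
    print(type(exc).__name__, exc)
```

Anything `multiprocessing` ships to a worker process goes through this same pickling step, which is why the closure works fine in serial code but fails as soon as a pool is involved.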

The function to be minimized:

import numpy as np
from scipy.integrate import odeint

def create_lossOdeint(data, recovered, death, s_0, e_0, a_0, i_0, r_0, d_0,
                      startNCases, weigthCases, weigthRecov):
    def lossOdeint(point):
        size = len(data)
        beta, beta2, sigma, sigma2, sigma3, gamma, b, mu = point
        def SEAIRD(y,t):
            S = y[0]
            E = y[1]
            A = y[2]
            I = y[3]
            R = y[4]
            p=0.2
            # beta2=beta
            y0=-(beta2*A+beta*I)*S+mu*S #S
            y1=(beta2*A+beta*I)*S-sigma*E-mu*E #E
            y2=sigma*E*(1-p)-gamma*A-mu*A #A
            y3=sigma*E*p-gamma*I-sigma2*I-sigma3*I-mu*I #I
            y4=b*I+gamma*A+sigma2*I-mu*R #R
            y5=(-(y0+y1+y2+y3+y4)) #D
            return [y0,y1,y2,y3,y4,y5]

        y0=[s_0,e_0,a_0,i_0,r_0,d_0]
        tspan=np.arange(0, size, 1)
        res=odeint(SEAIRD,y0,tspan) #,hmax=0.01

        tot=0
        l1=0
        l2=0
        l3=0
        for i in range(0,len(data.values)):
            if data.values[i]>startNCases:
                l1 = l1+(res[i,3] - data.values[i])**2
                l2 = l2+(res[i,5] - death.values[i])**2
                l3 = l3+(res[i,4] - recovered.values[i])**2
                tot+=1
        l1=np.sqrt(l1/max(1,tot))
        l2=np.sqrt(l2/max(1,tot))
        l3=np.sqrt(l3/max(1,tot))
        
        #weight for cases
        u = weigthCases
        #weight for recovered
        w = weigthRecov 
        #weight for deaths
        v = max(0,1. - u - w)
        return u*l1 + v*l2 + w*l3 
    return lossOdeint

The call to the minimizer:

        bounds=[(1e-12, .2),(1e-12, .2),(1/60 ,0.4),(1/60, .4),
        (1/60, .4),(1e-12, .4),(1e-12, .4),(1e-12, .4)]

        maxiterations=1000
        f=create_lossOdeint(self.data, self.recovered, \
            self.death, self.s_0, self.e_0, self.a_0, self.i_0, self.r_0, self.d_0, self.startNCases, \
                 self.weigthCases, self.weigthRecov)
        de = PDE(f, bounds, maxiters=maxiterations)
        with tqdm(total=maxiterations*500) as pbar:
            for step in de.geniterator():
                idx = step.best_idx
                norm_vector = step.population[idx]
                best_params = de.denormalize([norm_vector])
                pbar.update(1)  # tqdm.update takes an increment, not a running count
        p=best_params[0]