It should be pretty easy to update `solve_and_derivative` to not pre-compute anything for the derivative if the user doesn't need it. Otherwise, somebody who only sometimes uses the derivative will incur the overhead of the pre-computation even on calls where it goes unused. It also skews the timing of the forward/backward passes: time that should be attributed to the backward pass is actually measured in the forward pass. I think this overhead might be even larger for the explicit mode in #2, which calls into `cone_lib.dpi_explicit`.
I just tried running the following quick example, and the derivative pre-computation seems to add ~15% overhead:
```python
#!/usr/bin/env python3
import numpy as np
from scipy import sparse

import diffcp

nzero = 100
npos = 100
nsoc = 100
m = nzero + npos + nsoc
n = 100
cone_dict = {
    diffcp.ZERO: nzero,
    diffcp.POS: npos,
    diffcp.SOC: [nsoc],
}

A, b, c = diffcp.utils.random_cone_prog(m, n, cone_dict)
x, y, s, D, DT = diffcp.solve_and_derivative(A, b, c, cone_dict)

# evaluate the derivative
nonzeros = A.nonzero()
data = 1e-4 * np.random.randn(A.size)
dA = sparse.csc_matrix((data, nonzeros), shape=A.shape)
db = 1e-4 * np.random.randn(m)
dc = 1e-4 * np.random.randn(n)
dx, dy, ds = D(dA, db, dc)

# evaluate the adjoint of the derivative
dx = c
dy = np.zeros(m)
ds = np.zeros(m)
dA, db, dc = DT(dx, dy, ds)
```
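For reference, the `dA` construction in the example perturbs only the entries stored in `A` (for a SciPy sparse matrix, `A.size` is the number of stored entries, so the random data lines up with the pattern returned by `A.nonzero()`). A minimal sketch of that pattern on a toy matrix, with hypothetical data:

```python
import numpy as np
from scipy import sparse

# Toy stand-in for the problem matrix A, just to illustrate the
# perturbation construction in isolation.
A = sparse.csc_matrix(np.array([[1.0, 0.0],
                                [0.0, 2.0],
                                [3.0, 0.0]]))

# Perturb only A's stored nonzeros: random data of length A.size
# placed at the sparsity pattern returned by A.nonzero().
rows, cols = A.nonzero()
data = 1e-4 * np.random.randn(A.size)
dA = sparse.csc_matrix((data, (rows, cols)), shape=A.shape)

# dA has the same shape and the same sparsity pattern as A.
print(dA.shape == A.shape, dA.nnz == A.nnz)
```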
```
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    55                                           @profile
    56                                           def solve_and_derivative(A, b, c, cone_dict, warm_start=None, **kwargs):
   ...
   128         1      79706.0  79706.0     84.1      result = scs.solve(data, cone_dict, **kwargs)
   129
   130                                               # check status
   131         1          6.0      6.0      0.0      status = result["info"]["status"]
   132         1          4.0      4.0      0.0      if status == "Solved/Innacurate":
   133                                                   warnings.warn("Solved/Innacurate.")
   134         1          4.0      4.0      0.0      elif status != "Solved":
   135                                                   raise SolverError("Solver scs returned status %s" % status)
   136
   137         1          3.0      3.0      0.0      x = result["x"]
   138         1          4.0      4.0      0.0      y = result["y"]
   139         1          3.0      3.0      0.0      s = result["s"]
   140
   141                                               # pre-compute quantities for the derivative
   142         1          7.0      7.0      0.0      m, n = A.shape
   143         1          4.0      4.0      0.0      N = m + n + 1
   144         1         14.0     14.0      0.0      cones = cone_lib.parse_cone_dict(cone_dict)
   145         1         21.0     21.0      0.0      z = (x, y - s, np.array([1]))
   146         1          4.0      4.0      0.0      u, v, w = z
   147         1       1850.0   1850.0      2.0      D_proj_dual_cone = cone_lib.dpi(v, cones, dual=True)
   148         1          5.0      5.0      0.0      Q = sparse.bmat([
   149         1        271.0    271.0      0.3          [None, A.T, np.expand_dims(c, -1)],
   150         1        299.0    299.0      0.3          [-A, None, np.expand_dims(b, -1)],
   151         1       4230.0   4230.0      4.5          [-np.expand_dims(c, -1).T, -np.expand_dims(b, -1).T, None]
   152                                               ])
   153         1       2878.0   2878.0      3.0      M = splinalg.aslinearoperator(Q - sparse.eye(N)) @ dpi(
   154         1       3301.0   3301.0      3.5          z, cones) + splinalg.aslinearoperator(sparse.eye(N))
   155         1        445.0    445.0      0.5      pi_z = pi(z, cones)
   156         1       1742.0   1742.0      1.8      rows, cols = A.nonzero()
```
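One way to remove this overhead would be to defer the pre-computation until `D` or `DT` is first invoked, and cache the result so it runs at most once. A minimal sketch of the pattern, not diffcp's actual API: `solve_fn` and `precompute_fn` here are hypothetical placeholders for the solve and the derivative pre-computation.

```python
import functools

def solve_and_derivative_lazy(solve_fn, precompute_fn):
    """Solve eagerly, but defer the derivative pre-computation
    until D or DT is first called (hypothetical wrapper)."""
    solution = solve_fn()

    # A zero-argument lru_cache runs precompute_fn at most once,
    # no matter how many times D and DT are called afterwards.
    @functools.lru_cache(maxsize=None)
    def precomputed():
        return precompute_fn(solution)

    def D(perturbation):
        return ("D", precomputed(), perturbation)

    def DT(perturbation):
        return ("DT", precomputed(), perturbation)

    return solution, D, DT

# Usage: the expensive step is skipped entirely if D/DT go unused,
# and paid exactly once otherwise.
calls = []
sol, D, DT = solve_and_derivative_lazy(
    lambda: "solution",
    lambda s: calls.append(s) or "precomputed",
)
print(len(calls))   # -> 0: nothing pre-computed at solve time
D("dA"); DT("dx")
print(len(calls))   # -> 1: pre-computation ran exactly once
```

With this shape, users who never call `D` or `DT` pay nothing extra in the forward pass, and the pre-computation time shows up in the backward pass where it belongs.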