"""
Parameter exploration module to explore:
* Hyperparameters
* Model parameters
* Structure and complexity
* Memory relationships
* Cache relationships
* Processing relationships
Hyper-parameters: An architecure requires several tunable
scalars hyperparameter such as learning rate, momentum,
decay while non-convex optimization iterative search.
While some hyper-parameter setting can take estimator
to best possible configuration, others may never lead
to solution due to overshooting.
Model parameters: A good modelling is characterized by
a good micro-architecure or macro-architecure parameter
setting, giving possibility of evaluating the concept
on a different statement. Given a set of parameter
associated to architecure design itself, best can lead
both to accuracy and efficiency.
Structure and complexity: Anything other than micro
architecure design space exploration can be put in
this unstructured complexity of network.
Memory relationships: RAM in von-neuman model is
an essential block, and will remain untill some
breakthrough such os "optane" happens. Figuring out
the hidden relationship between RAM access and model architecure
(if any) is thus an essential exersise.
Cache relationships: Caches palys an important role
when it comes to relatively larger models (that cannot
fit into it at once). The access patterns and model
complexity as to do many thing with each other (with
compute kernels and testing conditions more or less
same).
Processing relationships: The frequency and temperature
(as controlled by operating system kernel) has to do many
thing with efficiency of an architected model. Fixing this
solves the problem, but if it becomes a variable, things
start becoming more interesting.
This module is GPLv3 licensed.
"""
import tensorflow as tf
import os
import profile_tf as profiler
import mobilenet_v1 as mobile
import squeezenet as squeeze
import report
"""
We dont save graph, and assume default graph is
all we have
"""
class model_generator:
    """
    options is a dictionary; 'param' has to be
    one of its keys.
    """
    def __init__(self, options=None, name='default'):
        self.name = name
        if options is not None:
            self.nr_param = len(options['parameters'])
            self.init_param = options['parameters']
            self.model_creator = options['model_creator']
            self.update_stats = options['stat_updater']
            self.param = options['param']
            self.others = options
            # do something with other options here
            self.model = self.model_creator(self.init_param)
            self.model_stats = self.update_stats()
        else:
            self.nr_param = -1
            self.init_param = None
            self.model_creator = None
            self.update_stats = None
            self.param = None
            self.others = None
            self.model = None
            self.model_stats = None

    def set_creator(self, fn):
        self.model_creator = fn

    def set_stats_updater(self, fn):
        self.update_stats = fn

    def set_param(self, param):
        # do param range checks
        # do checks with respect to current setting
        self.param = param

    def get_param(self):
        return self.param

    def generate_model(self):
        # logic to generate model based on self.param
        # checks if it is in the correct range
        tf.reset_default_graph()
        self.model = self.model_creator(self.param)

    def get_model(self):
        return self.model

    def set_and_get_model(self, param):
        # do param range checks
        # do checks with respect to current setting
        self.param = param
        self.generate_model()
        return self.model

    def get_model_stats(self):
        return self.update_stats()

    def set_and_stats(self, param):
        self.param = param
        self.generate_model()
        return self.get_model_stats()
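
# A minimal usage sketch (illustrative only): the creator and
# stat-updater callables below are hypothetical, as are the parameter
# values; they are not taken from mobilenet_v1, squeezenet, or
# profile_tf.
#
#     def make_model(params):
#         return mobile.mobilenet(params)   # hypothetical builder call
#
#     def collect_stats():
#         return profiler.profile()         # hypothetical stats collector
#
#     options = {
#         'parameters': [0.5, 0.75, 1.0],   # e.g. width multipliers
#         'param': 0.5,
#         'model_creator': make_model,
#         'stat_updater': collect_stats,
#     }
#     gen = model_generator(options, name='mobilenet_sweep')
#     stats = gen.set_and_stats(0.75)       # rebuild at a new setting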
def main():
    pass

if __name__ == "__main__":
    main()