Script
# execution of the Python files
from PUFmodels import *
reload(PUFmodels); from PUFmodels import *
test = XORArbPUF(10, 64, 'equal')
test.numXOR
test.calc_features(test.generate_challenge(4))
xorKnackertester(32, 2, 0.05, 0.01, 10, array([10000]), 'Test')
----------------------------------------------------------------------------
PUFmodels
# mathematical models of PUFs
- linArbPUF
''' linArbPUF provides methods to simulate the behaviour of a standard
Arbiter PUF (linear model)
attributes:
num_bits -- bit-length of the PUF
delays -- runtime differences between the straight connections
(first half) and the crossed connections (second half) of every switch
parameter -- parameter vector of the linear model (D. Lim)
'''
- XORArbPUF
'''
XOR of several independent PUFs
'''
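For reference, the linear model referenced above (D. Lim) computes the Arbiter PUF response as the sign of a scalar product between the delay parameter vector and a parity feature vector derived from the challenge; the XOR Arbiter PUF multiplies several such {-1, +1} responses. A minimal NumPy sketch of that mapping (function names here are illustrative, not the PUFmodels API):
##########################
import numpy as np

def arbiter_features(challenges):
    ''' map 0/1 challenges of shape (num_challenges, num_bits) to parity
    feature vectors in {-1, +1}; the last entry is a constant bias term '''
    signs = 1 - 2 * challenges                        # 0/1 bits -> +1/-1
    # Phi_i = prod_{j >= i} (1 - 2 c_j)
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

def arbiter_response(parameter, challenges):
    ''' response of the linear Arbiter PUF model: sign(parameter . Phi(c)) '''
    return np.sign(np.dot(arbiter_features(challenges), parameter))

# example: a 64-bit Arbiter PUF with Gaussian delay parameters
parameter = np.random.randn(65)
challenges = np.random.randint(0, 2, (4, 64))
print(arbiter_response(parameter, challenges))
##########################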
----------------------------------------------------------------------------
xorKnackertester
# attack on the XOR PUF
- xorKnackertester
# just an interface to xorKnacker
- xorKnacker
model = prodLinearPredictor(bitzahl + 1, numxor) # the prodLinearPredictor used for prediction
lesson = BasicTrainable(set, model, erf) # use this model for BasicTrainable
In class prodLinearPredictor(object):
self.indiv_linpredictor = [linearPredictor(dim, mean, stdev) for i in range(num_prod)]
self.indiv_linpredictor[predictor].shift_param([indiv_step]) # it actually uses shift_param of linearPredictor to change param step by step
So the update boils down to the basic function below:
##########################
def shift_param(self, step):
    ''' change parameter by the given step
    Keyword Arguments:
    step -- single-element list containing a 1D array with the same
            dimension as self.parameter
    Side Effects:
    changes the instance variable parameter
    Exceptions:
    DimensionError -- dimensions of step and self.parameter do not match
    '''
    step = step[0]
    if step.shape != self.parameter.shape:
        raise DimensionError
    else:
        self.parameter += step
##########################
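In prodLinearPredictor this update is simply forwarded to every individual linearPredictor, one step per predictor, and the XOR of {-1, +1} responses becomes the sign of the product of the individual linear outputs. A rough self-contained sketch of that structure (constructor defaults and the calc method are assumptions; only the lines quoted above are taken from the code):
##########################
import numpy as np

class linearPredictor(object):
    def __init__(self, dim, mean=0.0, stdev=1.0):
        # random initial parameter vector of the linear model
        self.parameter = mean + stdev * np.random.randn(dim)

    def calc(self, features):
        return np.dot(features, self.parameter)

    def shift_param(self, step):
        # the function shown above, without the dimension check
        self.parameter += step[0]

class prodLinearPredictor(object):
    def __init__(self, dim, num_prod):
        self.indiv_linpredictor = [linearPredictor(dim) for i in range(num_prod)]

    def calc(self, features):
        # sign of the product of linear outputs = XOR of the individual responses
        out = np.ones(features.shape[0])
        for pred in self.indiv_linpredictor:
            out = out * pred.calc(features)
        return out

    def shift_param(self, steps):
        # forward one step to every individual predictor
        for predictor, indiv_step in enumerate(steps):
            self.indiv_linpredictor[predictor].shift_param([indiv_step])
##########################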
Execution test:
>xorKnackertester(32, 2, 0.05, 0.01, 10, array([10000]), 'Test')
10000
1 1.0001 0.5036
1 .) MCrate(train): 0.01 time since start: -0.337867975235
MCrate: (test) 0.0142 time since start: -0.344507932663
# line 1: self.iteration_count, total_grad, train_performance
# line 2: performanceTrain, start - time.time()  (hence the negative times)
# line 3: performanceTest, start - time.time()
1 1.0001 0.4804
1 .) MCrate(train): 0.0089 time since start: -0.33918094635
MCrate: (test) 0.0124 time since start: -0.345828056335
1 1.0001 0.5077
1 .) MCrate(train): 0.0099 time since start: -0.284085035324
MCrate: (test) 0.0131 time since start: -0.290790081024
1 1.0001 0.4823
1 .) MCrate(train): 0.0098 time since start: -0.339424133301
MCrate: (test) 0.0088 time since start: -0.354150056839
1 1.0001 0.4978
1 .) MCrate(train): 0.0095 time since start: -0.31689786911
MCrate: (test) 0.0137 time since start: -0.323953866959
1 1.0001 0.5251
1 .) MCrate(train): 0.0094 time since start: -0.344790935516
MCrate: (test) 0.0092 time since start: -0.351211071014
1 1.0001 0.5008
1 .) MCrate(train): 0.01 time since start: -0.369421005249
MCrate: (test) 0.0129 time since start: -0.376559019089
1 1.0001 0.4864
1 .) MCrate(train): 0.0096 time since start: -0.235582113266
MCrate: (test) 0.012 time since start: -0.242256164551
1 1.0001 0.4934
1 .) MCrate(train): 0.0099 time since start: -0.406064033508
MCrate: (test) 0.0127 time since start: -0.412713050842
1 1.0001 0.5061
1 .) MCrate(train): 0.0088 time since start: -0.28179192543
MCrate: (test) 0.0122 time since start: -0.288369894028
finished
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
First line printed:
print self.iteration_count, total_grad, train_performance :
train_performance = self.mc_error.calc(lesson.trainset.targets,
                                        lesson.response()
                                        ) / lesson.trainset.targets.shape[0]
error = sum(1 - targets.squeeze() * sign(response.squeeze()) ) / 2
With targets and sign(response) in {-1, +1}, this equals (N - (#right - #wrong)) / 2,
i.e. the number of misclassified samples; dividing by targets.shape[0]
(the number of samples N) gives the misclassification rate MCrate.
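As a quick sanity check of that formula (plain NumPy, independent of the MCError class):
##########################
import numpy as np

def mc_rate(targets, response):
    # each wrong prediction contributes 1 - t*sign(r) = 2, each correct one 0,
    # so the sum divided by 2 counts the misclassified samples
    errors = np.sum(1 - targets.squeeze() * np.sign(response.squeeze())) / 2
    return errors / targets.shape[0]

targets = np.array([1, -1, 1, 1])
response = np.array([0.3, 0.2, -1.1, 4.0])   # second and third are misclassified
print(mc_rate(targets, response))            # -> 0.5
##########################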
----------------------------------------------------------------------------
light
- xorKnackertester
- xorKnacker
----------------------------------------------------------------------------
predictor
# prediction using different models, including learning methods
Model:
- linearPredictor
- prodLinearPredictor
- FFNeuralNet
- SVMmodel
Transfer functions:
- Sigmoid
- Tanh
Error estimation:
- LRError
- MSError
- MCError
- MCC
Learning functions:
- RProp (see the sketch after this list)
- GradientDescent
- AnealingGradientDescent
Train:
- Trainable
- BasicTrainable
- Learner
- GradLearner
- CrossValidation
- Closures
- TrainData
- SubSampling
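Of the learning functions, RProp is the standard resilient backpropagation rule: a per-parameter step size that grows while the gradient keeps its sign and shrinks when it flips, with the parameters moving against the gradient sign. A self-contained sketch of one such update with the usual constants (this is the common iRprop- variant, not necessarily the exact implementation in predictor):
##########################
import numpy as np

def rprop_step(param, grad, prev_grad, step_size,
               eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
    ''' one iRprop- update on a parameter vector '''
    sign_change = grad * prev_grad
    # grow steps where the gradient kept its sign, shrink where it flipped
    step_size = np.where(sign_change > 0,
                         np.minimum(step_size * eta_plus, step_max), step_size)
    step_size = np.where(sign_change < 0,
                         np.maximum(step_size * eta_minus, step_min), step_size)
    # iRprop-: ignore the gradient in directions that just flipped
    grad = np.where(sign_change < 0, 0.0, grad)
    param = param - np.sign(grad) * step_size
    return param, grad, step_size
##########################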