- Summary of key TensorFlow features
: build a network without assigning explicit tensor names,
then access the trainable parameters and main tensors and print their data
1. Input channels: 1
2. Convolution layer [11x11], output channels: 10
3. Bias
4. Convolution layer [5x5], output channels: 20
5. Convolution layer [3x3], output channels: 50
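With 'SAME' padding, each layer's spatial output size depends only on the stride: out = ceil(in / stride). A quick sketch of this arithmetic, using the 200x200 input size fed to the network later and the strides (4, 2, 2) of the three convolution layers (the helper name `same_out` is mine, for illustration only):

```python
import math

def same_out(size, stride):
    # TF 'SAME' padding output size: ceil(size / stride)
    return math.ceil(size / stride)

size = 200
for stride in (4, 2, 2):      # strides of the three conv layers
    size = same_out(size, stride)
    print(size)               # prints 50, then 25, then 13
```

Note that the filter size does not appear in the formula; with 'SAME' padding, TF pads just enough that only the stride determines the output size.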
```python
import tensorflow as tf
import numpy as np

net_inputs = tf.placeholder(tf.float32, [None, None, None, 1])
net_outputs = tf.placeholder(tf.float32, [None, None, None, 1])

# tf.nn.conv2d arguments (from the TF documentation):
# input:   A 4-D Tensor (half, bfloat16, float32, float64); dimension order follows data_format.
# filter:  A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]; same type as input.
# strides: A list of 4 ints; the stride of the sliding window for each dimension of input.
# padding: A string, "SAME" or "VALID"; the type of padding algorithm to use.
# use_cudnn_on_gpu: An optional bool. Defaults to True.
# data_format: "NHWC" (default, [batch, height, width, channels]) or "NCHW" ([batch, channels, height, width]).
# dilations: An optional list of 4 ints, default [1, 1, 1, 1]; if set to k > 1, there are k-1 skipped
#            cells between filter elements on that dimension. Batch and depth dilations must be 1.
# name: A name for the operation (optional).
```
```python
# 1st layer: W*X + B
W = tf.Variable(tf.random_normal([11, 11, 1, 10]))
B = tf.Variable(tf.random_normal([10]))
net = tf.nn.conv2d(net_inputs, filter=W, strides=[1, 4, 4, 1], padding='SAME')
net = tf.nn.bias_add(net, B)
net = tf.nn.relu(net)

# 2nd layer: W*X
W = tf.Variable(tf.random_normal([5, 5, 10, 20]))
net = tf.nn.conv2d(net, filter=W, strides=[1, 2, 2, 1], padding='SAME')
net = tf.nn.relu(net)

# 3rd layer: W*X
W = tf.Variable(tf.random_normal([3, 3, 20, 50]))
net = tf.nn.conv2d(net, filter=W, strides=[1, 2, 2, 1], padding='SAME')
net = tf.nn.relu(net)
```
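To make the 'SAME'-padding and stride behavior concrete, here is a minimal pure-NumPy sketch of a single-channel strided convolution (the helper name `conv2d_same` is mine, not a TF API; a sketch for intuition, not the cuDNN implementation):

```python
import numpy as np

def conv2d_same(x, w, stride):
    """Single-channel 2-D convolution with TF-style 'SAME' padding."""
    H, W = x.shape
    kh, kw = w.shape
    out_h = -(-H // stride)                 # ceil(H / stride)
    out_w = -(-W // stride)                 # ceil(W / stride)
    pad_h = max((out_h - 1) * stride + kh - H, 0)
    pad_w = max((out_w - 1) * stride + kw - W, 0)
    # TF puts the smaller half of an odd pad on the top/left
    xp = np.pad(x, ((pad_h // 2, pad_h - pad_h // 2),
                    (pad_w // 2, pad_w - pad_w // 2)))
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = xp[i * stride:i * stride + kh,
                       j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * w)
    return out

# matches the 1st TF layer above: 200 -> 50 with an 11x11 filter, stride 4
print(conv2d_same(np.zeros((200, 200)), np.zeros((11, 11)), 4).shape)  # (50, 50)
```

The per-channel version generalizes by summing over input channels and stacking one such output per filter, which is what `tf.nn.conv2d` does with its 4-D filter tensor.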
1. Checking the trainable parameters: convolution filters and bias
Code:
```python
print('Checking trainable weights')
train_vars = [var for var in tf.trainable_variables()]
for tvar in train_vars:
    print(tvar)
```
Result:
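Given the network code above, `tf.trainable_variables()` should return four variables: three convolution filters and one bias. Their shapes and parameter counts can be sanity-checked by hand (the labels in the dict are mine, for readability):

```python
import numpy as np

# shapes of the trainable variables created above
shapes = {
    'Variable (conv1 W)':   (11, 11, 1, 10),
    'Variable_1 (conv1 B)': (10,),
    'Variable_2 (conv2 W)': (5, 5, 10, 20),
    'Variable_3 (conv3 W)': (3, 3, 20, 50),
}
counts = {name: int(np.prod(s)) for name, s in shapes.items()}
total = sum(counts.values())
print(counts)
print(total)   # 1210 + 10 + 5000 + 9000 = 15220
```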
2. Checking all tensors:
- Since no names were assigned, load every tensor in the graph and then extract only the desired layers (convolution, bias).
- If names had been assigned, each tensor could be loaded directly with tf.get_default_graph().get_tensor_by_name().
Code:
```python
# Inspect every tensor in the graph
print('Checking tensors')
# tensors = [tf.get_default_graph().get_tensor_by_name(tensor.name) for tensor in tf.global_variables()]
alltensors = [tensor.values()[0] for tensor in tf.get_default_graph().get_operations()]
for tnsr in alltensors:
    print(tnsr)
```
Result:
```
Checking tensors
Tensor("Placeholder:0", shape=(?, ?, ?, 1), dtype=float32)
Tensor("Placeholder_1:0", shape=(?, ?, ?, 1), dtype=float32)
Tensor("random_normal/shape:0", shape=(4,), dtype=int32)
Tensor("random_normal/mean:0", shape=(), dtype=float32)
Tensor("random_normal/stddev:0", shape=(), dtype=float32)
Tensor("random_normal/RandomStandardNormal:0", shape=(11, 11, 1, 10), dtype=float32)
Tensor("random_normal/mul:0", shape=(11, 11, 1, 10), dtype=float32)
Tensor("random_normal:0", shape=(11, 11, 1, 10), dtype=float32)
Tensor("Variable:0", shape=(11, 11, 1, 10), dtype=float32_ref)
Tensor("Variable/Assign:0", shape=(11, 11, 1, 10), dtype=float32_ref)
Tensor("Variable/read:0", shape=(11, 11, 1, 10), dtype=float32)
Tensor("random_normal_1/shape:0", shape=(1,), dtype=int32)
Tensor("random_normal_1/mean:0", shape=(), dtype=float32)
Tensor("random_normal_1/stddev:0", shape=(), dtype=float32)
Tensor("random_normal_1/RandomStandardNormal:0", shape=(10,), dtype=float32)
Tensor("random_normal_1/mul:0", shape=(10,), dtype=float32)
Tensor("random_normal_1:0", shape=(10,), dtype=float32)
Tensor("Variable_1:0", shape=(10,), dtype=float32_ref)
Tensor("Variable_1/Assign:0", shape=(10,), dtype=float32_ref)
Tensor("Variable_1/read:0", shape=(10,), dtype=float32)
Tensor("Conv2D:0", shape=(?, ?, ?, 10), dtype=float32)
Tensor("BiasAdd:0", shape=(?, ?, ?, 10), dtype=float32)
Tensor("Relu:0", shape=(?, ?, ?, 10), dtype=float32)
Tensor("random_normal_2/shape:0", shape=(4,), dtype=int32)
Tensor("random_normal_2/mean:0", shape=(), dtype=float32)
Tensor("random_normal_2/stddev:0", shape=(), dtype=float32)
Tensor("random_normal_2/RandomStandardNormal:0", shape=(5, 5, 10, 20), dtype=float32)
Tensor("random_normal_2/mul:0", shape=(5, 5, 10, 20), dtype=float32)
Tensor("random_normal_2:0", shape=(5, 5, 10, 20), dtype=float32)
Tensor("Variable_2:0", shape=(5, 5, 10, 20), dtype=float32_ref)
Tensor("Variable_2/Assign:0", shape=(5, 5, 10, 20), dtype=float32_ref)
Tensor("Variable_2/read:0", shape=(5, 5, 10, 20), dtype=float32)
Tensor("Conv2D_1:0", shape=(?, ?, ?, 20), dtype=float32)
Tensor("Relu_1:0", shape=(?, ?, ?, 20), dtype=float32)
Tensor("random_normal_3/shape:0", shape=(4,), dtype=int32)
Tensor("random_normal_3/mean:0", shape=(), dtype=float32)
Tensor("random_normal_3/stddev:0", shape=(), dtype=float32)
Tensor("random_normal_3/RandomStandardNormal:0", shape=(3, 3, 20, 50), dtype=float32)
Tensor("random_normal_3/mul:0", shape=(3, 3, 20, 50), dtype=float32)
Tensor("random_normal_3:0", shape=(3, 3, 20, 50), dtype=float32)
Tensor("Variable_3:0", shape=(3, 3, 20, 50), dtype=float32_ref)
Tensor("Variable_3/Assign:0", shape=(3, 3, 20, 50), dtype=float32_ref)
Tensor("Variable_3/read:0", shape=(3, 3, 20, 50), dtype=float32)
Tensor("Conv2D_2:0", shape=(?, ?, ?, 50), dtype=float32)
Tensor("Relu_2:0", shape=(?, ?, ?, 50), dtype=float32)
```
3. Checking the convolution / bias tensors
Code:
```python
convtesnor = []
for tnsr in alltensors:
    if 'Conv2D' in tnsr.name or 'BiasAdd' in tnsr.name:
        convtesnor.append(tnsr)

print('convolution tensor')
for tnsr in convtesnor:
    # print(tf.get_default_graph().get_tensor_by_name(tnsr))
    print(tnsr)
```
Result:
```
convolution tensor
Tensor("Conv2D:0", shape=(?, ?, ?, 10), dtype=float32)
Tensor("BiasAdd:0", shape=(?, ?, ?, 10), dtype=float32)
Tensor("Conv2D_1:0", shape=(?, ?, ?, 20), dtype=float32)
Tensor("Conv2D_2:0", shape=(?, ?, ?, 50), dtype=float32)
```
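The substring test works because TF auto-generates names as `Conv2D`, `Conv2D_1`, `Conv2D_2`, and so on. A stricter way to select the same tensors is a regex over the names; a small illustration with the name list hardcoded from the graph dump above:

```python
import re

# op names taken from the graph dump above (hardcoded for illustration)
names = ["Conv2D:0", "BiasAdd:0", "Relu:0",
         "Conv2D_1:0", "Relu_1:0", "Conv2D_2:0", "Relu_2:0"]

# accept Conv2D/BiasAdd, optionally followed by an _N suffix, output index 0
pattern = re.compile(r"^(Conv2D|BiasAdd)(_\d+)?:0$")
picked = [n for n in names if pattern.match(n)]
print(picked)   # ['Conv2D:0', 'BiasAdd:0', 'Conv2D_1:0', 'Conv2D_2:0']
```

The same pattern can be applied to `tnsr.name` in the loop above; unlike a bare substring test, it will not match unrelated ops whose names merely contain "Conv2D".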
4. Running a session to check the state of each layer
: the input is set to 200x200x1
Code:
```python
inputs = np.random.randn(1, 200, 200, 1)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for tnsr in convtesnor:
    print(tnsr)
    current_output = sess.run(tnsr, feed_dict={net_inputs: inputs})
    print(current_output.shape)
```
Result:
```
Tensor("Conv2D:0", shape=(?, ?, ?, 10), dtype=float32)
(1, 50, 50, 10)
Tensor("BiasAdd:0", shape=(?, ?, ?, 10), dtype=float32)
(1, 50, 50, 10)
Tensor("Conv2D_1:0", shape=(?, ?, ?, 20), dtype=float32)
(1, 25, 25, 20)
Tensor("Conv2D_2:0", shape=(?, ?, ?, 50), dtype=float32)
(1, 13, 13, 50)
```