
tf.nn.softmax_cross_entropy_with_logits_v2

고슴군 2019. 10. 10. 11:14

The three operations performed by tf.nn.softmax_cross_entropy_with_logits_v2:

 

  1. Apply the softmax function to the logits (y_hat) to normalize them: y_hat_softmax = softmax(y_hat)
  2. Compute the element-wise cross-entropy term: y_cross = y_true * tf.log(y_hat_softmax)
  3. Sum that term over the classes of each example to get one loss value per example: -tf.reduce_sum(y_cross, axis=1) (see the NumPy sketch right after this list)
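
A minimal plain-NumPy sketch of the same three steps (assuming only NumPy, no TensorFlow) makes the arithmetic easy to follow by hand:

import numpy as np

y_true = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
y_hat = np.array([[0.5, 1.5, 0.1], [2.2, 1.3, 1.7]])

# step 1: softmax along the class axis
exps = np.exp(y_hat)
y_hat_softmax = exps / exps.sum(axis=1, keepdims=True)

# step 2: element-wise cross-entropy term
y_cross = y_true * np.log(y_hat_softmax)

# step 3: one loss value per example (sum over classes, negated)
result = -np.sum(y_cross, axis=1)
print(result)  # [0.4790107  1.19967598]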

The TensorFlow code below demonstrates this exactly.

 

Code:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x, graph mode

y_true = tf.convert_to_tensor(np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]))
y_hat = tf.convert_to_tensor(np.array([[0.5, 1.5, 0.1], [2.2, 1.3, 1.7]]))

# first step: normalize the logits with softmax
y_hat_softmax = tf.nn.softmax(y_hat)

# second step: element-wise cross-entropy term
y_cross = y_true * tf.log(y_hat_softmax)

# third step: one loss value per example (sum over classes, negated)
result = -tf.reduce_sum(y_cross, axis=1)

# the same three steps in a single call
result_tf = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_true, logits=y_hat)

with tf.Session() as sess:
    print('y_hat_softmax:\n{0}\n'.format(y_hat_softmax.eval()))
    print('y_true: \n{0}\n'.format(y_true.eval()))
    print('y_cross: \n{0}\n'.format(y_cross.eval()))
    print('result: \n{0}\n'.format(result.eval()))
    print('result_tf: \n{0}'.format(result_tf.eval()))

 

 

Output:

y_hat_softmax:
[[0.227863   0.61939586 0.15274114]
 [0.49674623 0.20196195 0.30129182]]

y_true: 
[[0. 1. 0.]
 [0. 0. 1.]]

y_cross: 
[[-0.         -0.4790107  -0.        ]
 [-0.         -0.         -1.19967598]]

result: 
[0.4790107  1.19967598]

result_tf: 
[0.4790107  1.19967598]
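
For reference, TensorFlow 2.x drops the _v2 suffix; under eager execution the same per-example losses can be obtained without a Session (a minimal sketch assuming TF 2.x):

import tensorflow as tf  # TensorFlow 2.x, eager execution

y_true = tf.constant([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
y_hat = tf.constant([[0.5, 1.5, 0.1], [2.2, 1.3, 1.7]])

# in TF 2.x the function is tf.nn.softmax_cross_entropy_with_logits (no _v2)
result_tf = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_hat)
print(result_tf.numpy())  # approximately [0.4790107 1.199676]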

 

 

 

[Source] https://stackoverflow.com/questions/49377483/about-tf-nn-softmax-cross-entropy-with-logits-v2

