Long time no see, my blog.

I’m still alive here. I think I’ll post about the competition later, too. It ended in September, but I need to work on the next one, so I don’t have time to keep updating this blog ;-;

And these days I practically live on GitHub. I spent my whole Thanksgiving holiday on GitHub. 🙂

You are welcome to visit my GitHub repos.


Progress Report, Vision: Cross-Platform TensorFlow and OpenCV Wrapping Library

For “Neural Action”, my KOI project, I needed OpenCV and TensorFlow with cross-platform support and a C# API.

So I created an OpenCV wrapper for Xamarin, the cross-platform C# framework, and I ported migueldeicaz’s legendary TensorFlowSharp project to Xamarin.

The result of this combination is a computer vision library with TensorFlow!

In my project, I will use both: eye gaze estimation with a CNN, and word suggestion with an RNN (seq2seq).

I’m really worried about whether this method will work properly, because there aren’t any successful examples yet…


GitHub Links

Vision: https://github.com/gmlwns2000/Vision

Tensorflow for Xamarin: https://github.com/gmlwns2000/TensorFlowSharp

p.s. There aren’t any examples in the TensorFlowSharp repository. You can find some application implementations in the Vision repository.

p.s. Currently, TFSharp for Xamarin only supports Windows, Android, and UWP, because I don’t have a Mac 🙁

I Just Submitted My Project to K.O.I


I just submitted my A.I. project to the Korean Olympiad of Information about 10 minutes ago.

I hope my submission passes. 🙂

This year’s competition has changed since the last one. This time, we have to submit a project document first. If the document is selected, we get some lectures about C.S.! The next steps are the same as last year: presentation, awards, and then ISEF-K. The lectures are the biggest difference from the past.

In past competitions, attending required advanced knowledge, because the organizers didn’t give students any help.

But now it isn’t like that! So I think many more students who are interested in computer science will come to this competition than last year. More competitors will make the competition harder, so I’m a little worried about it. 🙁

First semester exams are over!

For about the last month, I couldn’t write any posts on codex because of semester exams.

In Korea, students have to take exams twice per semester.

It sounds weird, but what I mean is that we take two big exams each semester.

So my first exam of the semester in Korea went just fine (I think I’ll get about an average grade).

From now on, I have to rush to complete my project, which will be submitted to this year’s competition (the Korean Olympiad of Information), by May 15th!

I think this schedule is pretty tough, but I will do my best. XD


P.S. I made a new logo for our studio! 🙂

[AI] MNIST Handwritten Digits Recognizer – Single-Hidden-Layer N.N. (Neural Network)


This time, I made my first A.I. program.

I read a book that is a really good guide for getting started with neural networks (N.N.).

It was the most helpful book I’ve found for understanding N.N.

I strongly recommend it if you’re interested in N.N.

And it includes a little example MNIST recognition project.

It is an implementation of a single-hidden-layer N.N. model. I used 1568 neurons in the hidden layer.

How this works

This program is built to solve the MNIST handwritten digit recognition problem.

It reads training data from a CSV file available on the internet.

Then it queries the test data against the trained network and returns the network’s accuracy.
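The flow above can be sketched in Python. This is a minimal sketch, not my actual program (the real source is in the GitHub repository linked below); I’m assuming the common MNIST CSV layout, where the label comes first, followed by 784 pixel values from 0 to 255.

```python
def parse_record(line):
    """Parse one MNIST CSV record: label first, then 784 pixels (0-255).
    Pixels are rescaled into 0.01-1.00 so the network inputs never hit 0."""
    values = line.strip().split(",")
    label = int(values[0])
    inputs = [(int(v) / 255.0) * 0.99 + 0.01 for v in values[1:]]
    return label, inputs

def accuracy(predicted, actual):
    """Fraction of test records the trained network got right."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# A tiny fake record: label 7, 783 blank pixels, one fully-on pixel.
label, inputs = parse_record("7," + ",".join(["0"] * 783 + ["255"]))
print(label, len(inputs))  # 7 784
```

Training would fill the `predicted` list by querying the network once per test record, then `accuracy` gives the final percentage.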

MNIST Data Set

A neural network is a huge set of artificial neurons (called perceptrons).

A neuron receives inputs, and it returns an output if the inputs are strong enough to fire it.

If a neuron receives 1,000 inputs, it multiplies each input by the weight of its input node.

Then it sums all of the multiplied values.

That sum is used to make the output, through an activation function (I use the sigmoid function in this program; other options include ReLU, etc.).
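The weighted-sum-then-activation step can be written in a few lines of Python. This is just an illustration with made-up numbers, not code from my program:

```python
import math

def sigmoid(x):
    """Sigmoid activation: squashes any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights):
    """Multiply each input by its weight, sum them, then apply sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return sigmoid(total)

# Three inputs, three weights: sum = 0.27 + 0.06 + 0.40 = 0.73
out = neuron_output([0.9, 0.1, 0.8], [0.3, 0.6, 0.5])
print(round(out, 3))  # 0.675
```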

We can train a neuron by changing the weights of its inputs. This is called back-propagation and training.

Back-propagation distributes each error according to how strong the weights are.

Say two nodes are connected to a neuron, and it produced an error of 0.5. The first node’s weight is 1.0 and the other’s is 0.2.

Then node 1’s share of the error is {1.0 / (1.0 + 0.2)} * 0.5 ≈ 0.417.

So each node gets its own share of the error.
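This weight-proportional split can be sketched like so (a toy illustration assuming positive weights; the names here are mine, not from my program):

```python
def split_error(error, weights):
    """Split a neuron's error among its input links,
    proportionally to each link's weight."""
    total = sum(weights)
    return [error * w / total for w in weights]

# Error 0.5, weights 1.0 and 0.2: shares are 1.0/1.2 and 0.2/1.2 of it.
shares = split_error(0.5, [1.0, 0.2])
print([round(s, 3) for s in shares])  # [0.417, 0.083]
```

Note that the two shares always add back up to the original error.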

And the training step itself uses the derivative (differential) of the activation function.
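For a sigmoid neuron, the usual textbook form of that update looks like the sketch below. This is my own hedged sketch of the standard gradient-descent rule, not the exact code of this program, and the numbers are made up:

```python
def update_weight(w, inp, out, error, lr=0.1):
    """One gradient-descent step for a single link into a sigmoid neuron.
    The sigmoid's derivative at an output `out` is out * (1 - out), so
    the weight is nudged by: learning_rate * error * out*(1-out) * input."""
    return w + lr * error * out * (1.0 - out) * inp

new_w = update_weight(w=0.5, inp=0.9, out=0.675, error=0.5)
print(round(new_w, 4))  # 0.5099
```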

I still can’t understand the training process and the derivative of the activation function very well.

If you can explain it well, please leave a comment!


What’s happening in the video?

The video up to 3:05 is the training scene.

In this step, you can see ASCII art of the training data (ASCII art is always fun), the epoch (how many times training has been repeated), the train label (the answer for the training data), the percent complete, and the learning iterations per second.

Then the user enters how many test records to process. The MNIST test database contains 10,000 test records.

You can see the accuracy in real time (I typed “occuracy” instead of “accuracy” in this program; that’s my typo xD).

This model’s accuracy is about 94%.

Changes from last update

04 April ’17 – Optimize memory usage while training; fix reduced console output.

Source Code

GitHub Repository