I Just Submitted My Project to KOI


I just submitted my AI project to the Korean Olympiad in Information (KOI) about 10 minutes ago.

I hope this submission passes. 🙂

This year's competition has changed since the last one. This time, we had to submit a project document first. If the document is selected, we get some lectures about CS! The next steps are the same as last year: presentation, awarding, and then ISEF-K. The lectures are the biggest difference from the past.

In past competitions, attending required advanced knowledge, because the organizers didn't give students any help.

But now that's changed! So I think many more students who are interested in computer science will come to this competition than last year. More competitors will make the competition harder, so I'm a little worried about it. 🙁

First Semester Exams Are Over!

For about a month, I couldn't write any posts on codex because of the semester exams.

In Korea, students have to take exams twice per semester.

It may sound weird, but what I mean is that we take two big exams per semester.

My first-semester exams went just fine (I think I'll get about an average grade).

From now on, I have to rush to complete my project, which will be submitted to this year's competition (Korean Olympiad in Information), by May 15th!

I think this schedule is pretty tight, but I will do my best. XD


P.S. I made a new logo for our studio! 🙂

[AI] MNIST Handwritten Digits Recognizer – Single-Layer NN (Neural Network)


This time I made my first AI program.

I read a book that is a really good guide for getting started with neural networks (NNs).

It was the most helpful book I've ever read for understanding NNs.

I strongly recommend it if you are interested in NNs.

And it includes a little example MNIST recognition project.

It is an implementation of a neural network model with a single hidden layer. I used 1,568 neurons in the hidden layer.

How this works

This program is built to solve the MNIST handwritten digit recognition problem.

It reads the training data from a CSV file provided on the internet.

Then it queries the test data against the trained network and reports the network's accuracy.
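A rough sketch of those two steps, assuming the common MNIST CSV layout (the label first, then 784 pixel values in 0–255) and a `network` object with a `query()` method like the book's — the names here are my own, not the actual program's:

```python
import numpy as np

def parse_record(line):
    """Parse one MNIST CSV record: the label, then 784 pixel values (0-255)."""
    values = line.strip().split(',')
    label = int(values[0])
    # Scale pixels into 0.01..1.00 so inputs are never exactly zero
    pixels = (np.asarray(values[1:], dtype=float) / 255.0 * 0.99) + 0.01
    return label, pixels

def accuracy(network, test_lines):
    """Query each test record and return the fraction answered correctly."""
    correct = 0
    for line in test_lines:
        label, pixels = parse_record(line)
        outputs = network.query(pixels)        # one output node per digit 0-9
        if int(np.argmax(outputs)) == label:   # highest output wins
            correct += 1
    return correct / len(test_lines)
```

The accuracy number the program prints is just this fraction over however many test records you ask it to run.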

MNIST Data Set

A neural network is a huge set of artificial neurons (called perceptrons).

A neuron receives inputs and returns an output if the inputs are strong enough to produce one.

If a neuron receives 1,000 inputs, it multiplies each input by the weight of its input node.

Then it sums all of the multiplied values.

That sum is used to produce the output through an activation function (I use the sigmoid function in this program; there are other options, such as ReLU).
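The weighted-sum-plus-activation step above can be sketched like this (a minimal example, not the actual program):

```python
import numpy as np

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights):
    """Multiply each input by its weight, sum them, then apply the activation."""
    weighted_sum = np.dot(inputs, weights)
    return sigmoid(weighted_sum)
```

For example, inputs `[1.0, 0.0]` with weights `[2.0, -1.0]` give a weighted sum of 2.0, and sigmoid(2.0) ≈ 0.88.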

We can train a neuron by changing the weights of its inputs. This is called back-propagation and training.

Back-propagation distributes the errors according to how strong each weight is.

Say two nodes are connected to a neuron that produced an error of 0.5, where the first node's weight is 1.0 and the other's is 0.2.

Then node 1's error is {1.0 / (1.0 + 0.2)} * 0.5.

So each node gets its share of the error.

And training proceeds using the derivative of the activation function.
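As far as I understand it, the weight-proportional error split and the sigmoid derivative can be sketched as follows (my own hypothetical helper names, not the book's code):

```python
import numpy as np

def split_error(error, weights):
    """Give each input node a share of the error proportional to its weight."""
    weights = np.asarray(weights, dtype=float)
    return error * weights / weights.sum()

def sigmoid_derivative(output):
    """Derivative of the sigmoid, written in terms of the neuron's output o:
    o * (1 - o). Training scales weight updates by this value."""
    return output * (1.0 - output)
```

With the weights 1.0 and 0.2 and an error of 0.5 from the example above, `split_error` gives about 0.417 for the first node and 0.083 for the second.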

I don't understand the training process and the derivative of the activation function very well yet.

If you can explain it well, please leave a comment!


What’s happening in the video?

The video up to 3:05 shows the training.

In this step you can see ASCII renderings of the training data (ASCII art is always fun), the epoch (how many times training has been repeated), the label (the answer for each training record), the percentage, and how many records are learned per second.

Then the user enters how many test records to process. The MNIST test database contains 10,000 test records.

You can see the accuracy in real time (I typed "occuracy" instead of "accuracy" in this program; it's my typo xD).

This model’s accuracy is about 94%.

Changes from last update

04 April `17 – Optimized memory usage during training; fixed the reduced console output.

Source Code

GitHub Repository

[AI] Keras Installation Guide With Anaconda (Really Easy)

Keras Homepage

Keras is a COOL deep learning library for Theano and TensorFlow, written in Python.

I tried installing it with Anaconda because I use Windows.

Anaconda is a powerful tool for creating virtual environments (not virtual machines), and it supports macOS and Linux too.

So it makes it easier to install things like TensorFlow through pip…


First, you should install Anaconda on your Windows machine.


After installing Anaconda, open CMD (or a terminal on macOS or Linux) and create a new Anaconda environment that uses Python 3.5:

  conda create --name keras python=3.5

Okay, you've just created an Anaconda environment named “keras”.

To use the environment, we need to activate it:

  activate keras

If you can see “(keras)” in front of each line, you have successfully entered the environment.

The next step is installing the libraries:

  conda install theano
  conda install mingw libpython
  pip install tensorflow
  pip install keras

That's all! Let's test it out:

  python -c "from keras import backend; print(backend._BACKEND)"

stackoverflow article

About This Year's Plans

Basically, the schedule will be similar to last year's.

But this time, I will attend two types of competitions.

One of them tests computer skills with coding problems, and the other is a project competition.

The coding competition starts in April, and the project competition starts in May and ends in September.

Oh, and Symphony will be released this year (maybe xD)!

KOI – 2016 Category Is Closed.

Ha ha! This project category (KOI 2016) is closed, because I failed to pass it!

🙂 Yeah, I CAN'T GO TO AMERICA! I was so sad about it.

I think the main reason I failed is English. Actually, that's the whole reason why “we” failed.

I did my best in this competition, but I failed.

BUT! I can't stop here. I will take on the next competitions!

[Symphony] Progress Report From 12/14/2016 to 03/06/2017

Hi guys, it's been a really long time since my last progress report for this project.


  • Completed the translation
  • Fixed some BUGS.


  • Added YOUTUBE STREAMING (my dream came true). And it loads asynchronously!
  • Now we can change album art right from Symphony.
  • Added a playlist-item drop panel shown while dragging items
  • Now supports a BUILT-IN SKIN EDITOR! You can use images, colors, and gradients for skinning!
  • Improved the UX of the DSP editor
  • Added a normalizer (Lua)
  • Can add all the music in a folder at once


  • Finally, Symphony uses Direct2D (an ultra-fast 2D rendering library based on Direct3D) instead of WPF DrawingContext to render visualizers
    • Now supports FULL hardware acceleration. REAL 60 fps!
  • Changed the mini-controller; added background album art to the controller window
  • Added some resamplers to the spectrum analyzers
  • Can show audio values the same way as other audio graphs
  • Can change the VU meter sensitivity


  • Added a layout system
  • Added image content
  • Added animation presets
  • Changed the editor UI.

I thought there wouldn't be much to write about… but there was!