Saturday, November 17, 2018

The "local search" problem  -  part 2

Last time, we talked about the problem of continuously having new data to consider, which forces us to re-train our model. This time, we will see how research tries to address it by reducing the dimension of the datasets (the X in the last example).
Feature selection for classification can be seen as a combinatorial optimization problem whose objective is to select a subset of n attributes from an initial set of N attributes, such that n < N, where the quality of a solution can be measured by the quality of the classification model constructed using this subset of attributes. The search space contains 2^N possible subsets, and the problem is known to be NP-hard, so approximate methods such as metaheuristics are often used to find a good-quality solution in a reasonable time.
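To make this concrete, here is a minimal sketch of how one candidate subset can be scored. I'm using scikit-learn, a synthetic dataset, and a k-NN classifier purely for illustration; none of these choices come from the original study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy dataset: N = 20 features, only a few of them actually informative
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

def evaluate(mask):
    """Score a candidate solution: a boolean vector over the N features.

    The fitness is the cross-validated accuracy of a classifier
    trained only on the selected columns.
    """
    if not mask.any():          # empty subset: worst possible score
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

# One candidate among the 2**20 possible subsets
mask = np.zeros(20, dtype=bool)
mask[[0, 3, 7]] = True
print(f"accuracy with features {np.flatnonzero(mask)}: {evaluate(mask):.3f}")
```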
In a previous work, a metaheuristic based on local search, called Bee Swarm Optimization (BSO), was used to solve the feature selection (FS) problem, and to “take you back to when it all first started”…


A clue: it’s a song :D
Our project consists of introducing reinforcement learning into this existing method, so first I’m going to briefly present the algorithm, and what could be improved in its local search.

The Bee Swarm Optimization (BSO) metaheuristic is inspired by the foraging behavior of real bees: a bee starts by leaving the hive in order to find a flower and gather nectar. Then, it returns to the hive and unloads the nectar. If the food source is rich, the bee communicates its direction and distance to its mates via a dance. The other bees naturally follow the one that indicates the best food source.

First, an initial solution is generated randomly or via a heuristic. This solution becomes the reference solution, refSol, from which a search region, a set of candidate solutions, is determined. After that, each solution is assigned to a bee as the starting point of a local search. At the end of the search, each bee communicates its best found solution to the others through a table named dance.
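Here is a rough sketch of that loop. The way I derive the search region here (flipping one distinct bit per bee), the hill-climbing local search, and all the parameters are simplifications I made up for illustration; the actual BSO determines the search region more carefully:

```python
import random

def bso(fitness, n_features, n_bees=10, n_iters=20, local_steps=5):
    """Minimal Bee Swarm Optimization skeleton for feature selection.

    A solution is a list of 0/1 flags over the features. Each iteration:
    derive a search region from refSol, run one local search per bee,
    collect the results in the dance table, keep the best as next refSol.
    """
    ref_sol = [random.randint(0, 1) for _ in range(n_features)]
    best_sol, best_fit = ref_sol[:], fitness(ref_sol)

    for _ in range(n_iters):
        # Search region: candidate solutions derived from refSol
        # (here: flip one distinct bit per bee -- a simplification)
        region = []
        for b in range(n_bees):
            cand = ref_sol[:]
            cand[b % n_features] ^= 1
            region.append(cand)

        # Each bee performs a local search from its starting point,
        # then reports its best solution to the dance table
        dance = []
        for sol in region:
            cur, cur_fit = sol, fitness(sol)
            for _ in range(local_steps):
                nxt = cur[:]
                nxt[random.randrange(n_features)] ^= 1  # flip a random bit
                nxt_fit = fitness(nxt)
                if nxt_fit > cur_fit:
                    cur, cur_fit = nxt, nxt_fit
            dance.append((cur_fit, cur))

        # The best dance entry becomes the next reference solution
        ref_fit, ref_sol = max(dance, key=lambda t: t[0])
        if ref_fit > best_fit:
            best_fit, best_sol = ref_fit, ref_sol[:]

    return best_sol, best_fit

# Toy fitness: pretend only features 2, 5 and 11 matter
target = {2, 5, 11}
toy = lambda s: sum(1 for i, v in enumerate(s) if (v == 1) == (i in target))
print(bso(toy, n_features=16))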

The problem with the current definition of the best found solution is the following: we represent a solution as a vector of bits mapped to the actual set of features (1 means the feature is used, 0 means it is not), and the quality of a solution is the classification accuracy obtained with the selected subset. This means we never consider the effect, or contribution, of an individual added/removed feature. We think this is a great opportunity to use reinforcement learning methods, and to try to learn a good policy that keeps the most relevant features based on the observed effect of a subset of features.
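To show what I mean by the contribution of a single feature, here is one naive way to measure it (again, just an illustrative sketch with scikit-learn, not our actual method): flip one bit and look at the change in accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

def evaluate(mask):
    # Same scoring as in the earlier sketch: cross-validated accuracy
    # of a k-NN classifier on the selected columns
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def contribution(mask, i):
    """Marginal effect of feature i on the current subset:
    accuracy with the bit set minus accuracy with it cleared."""
    with_i, without_i = mask.copy(), mask.copy()
    with_i[i], without_i[i] = True, False
    return evaluate(with_i) - evaluate(without_i)

mask = np.random.default_rng(0).random(20) < 0.5  # random starting subset
for i in range(3):
    print(f"feature {i}: contribution {contribution(mask, i):+.3f}")
```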

If you have any comments or suggestions, let me know in the comments; I could really use some help. (In fact, I switched to Medium based on a friend’s suggestion; for me it’s all the same.)

Thursday, November 15, 2018

The "local search" problem - part 1

This week, since there are two of us working on this project, we had to start thinking in a deep and formal way about optimizing the already existing metaheuristic ... Oh, I forgot, I didn't explain how we got here, so I'm going to start by first introducing our project.
Supervised classification is a very important task in data mining and part of the machine learning toolbox: it consists of assigning objects to groups that share the same characteristics, based on a set of features or attributes. To simplify things, imagine your data as a table that has as many columns as there are features, let’s call that number X, and as many rows as you have clients (when talking about a company’s database, for example), called Y. At a moment t1, you have a table of X × Y elements.
Now imagine you build a model based on this table (X, Y, t1) to try and predict future data, but a new client just got into your database, so you need to update your model; you build a new model based on (X, Y+1, t2). But hey, 4 more clients came in just moments after you built your model, so it becomes (X, Y+5, t3). While your database grows, the cost of training a new model gets higher with each new model. To give you an idea, computing the determinant of a 25x25 matrix by naive cofactor expansion is already too expensive, even for a computer (it takes on the order of 25! operations), so imagine a table of 100 x 1 000 000 or even more, and that is just one training run. This explosion of cost with the number of dimensions is what we call the curse of dimensionality.
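A quick back-of-the-envelope check of that claim (the naive cofactor/Laplace expansion costs on the order of n! operations; smarter methods like Gaussian elimination are of course much cheaper):

```python
import math

# Naive cofactor (Laplace) expansion of an n x n determinant
# costs on the order of n! multiplications
n = 25
ops = math.factorial(n)
print(f"{n}! = {ops:.2e} operations")  # ~1.55e+25

# Even at a billion operations per second, that is:
years = ops / 1e9 / (60 * 60 * 24 * 365)
print(f"~{years:.1e} years at 1e9 ops/s")  # ~4.9e+08 years, about half a billion
```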



So researchers said: since we have no control over new instances of data (the rows), let's try to reduce the number of attributes (the columns). But the problem with this approach is that ...

Well, that will be something for the next article (a way for me to commit to writing this time, since I have that OCD about finishing things I start).

PS: I wanted to say that anyone should be proud of what they are and do now, because it is a part of what you will be tomorrow. For me, this kind of stuff is my mindset now, and even if, after some years, I won't have the same ideas, that doesn't contradict the fact that I was this way at a certain moment, and it will always be a part of who I am.

Tuesday, November 13, 2018

Why a "blog" & why now ?


If you ask me why I'm doing this, I'd be like:


The blog concept is now "deprecated", so why would I try to have mine now?
You could say that I am an old-school guy whose only objective is to change the world, bla bla bla ...
Nah, more seriously: when you write in your personal blog, it's like writing a public diary where you have no pressure, you feel that you can write whatever you want, and that's what I will be trying to do.
The main purpose of this blog is an initiative to share the knowledge I acquired during my 5 years as a computer engineering student at Ecole nationale Supérieure d'Informatique - ESI, Algiers, and the knowledge I will acquire in the future. I felt the need to do this because I will graduate this year, and I am working on using reinforcement learning to boost a metaheuristic called Bee Swarm Optimization for the feature selection problem. To be honest, I've spent some time figuring things out (when I wrote this, I had been working on the project for two and a half months), because when you read articles, you find so many points of view, and each researcher or group of researchers has their own perspective. Which means that your contribution could be to do the abstraction of the studies and make a sort of survey (a concept that already exists, but still, you can't understand the fundamentals just by reading surveys, you need to do your own).

I actually tried to start a personal blog once; I wrote 2-3 articles, then stopped. I hope this time will be different, because I think that this time I have solid content to share.
