computational_intelligence's Issues

Peer Review by s298885

Hello,
I have reviewed your code and tried to run it, and I have a few comments. First of all, your code is well organized, which made it easy for me to understand and keep track of things. Secondly, I believe you tried hard to find the optimal solution, but BFS is not the best choice here, since it does not scale to higher N (I tried it too and it did not work as I expected). DFS or some other approach could give you better results for higher N. The results are nevertheless very good. Good work. :)
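For illustration, a greedy approach is one alternative that scales to larger N; the encoding and names below are my own assumptions, not the reviewed code:

```python
def greedy_cover(universe, subsets):
    """Greedy set covering: repeatedly pick the subset that covers the
    most still-uncovered elements. Fast for large N, though the cover
    found is not guaranteed to be minimal."""
    uncovered = set(universe)
    solution = []
    while uncovered:
        # choose the subset with the largest overlap with what is left
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe not coverable by the given subsets")
        solution.append(best)
        uncovered -= set(best)
    return solution
```

For example, `greedy_cover(range(5), [[0, 1], [2, 3], [4], [0, 4]])` picks three subsets covering 0..4.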

Peer review

Hi,
Overall, I think you did a good job because the algorithm works and finds solutions that seem to be good, near the optimum!
The README file is short, but clear; the code is not too difficult to understand, but a few comments might help.
The choice that caught my attention the most is that, I think, you do a sort of initialization of the initial state (like a heuristic that fixes part of the solution to reduce the computation and execution time, at the expense of a possibly worse solution): 'initial_state = State(np.array([input_state[0]]))'. I think this amounts to imposing a constraint on the search for the solution (but I do not know whether this is intended). Without initialization, i.e. 'initial_state = State(np.array([]))', for N=20 the weight of the reached solution is 24 (the weight of the solutions obtained with Dijkstra or A* is 23), though, obviously, the number of nodes and states increases; with initialization it is 27 (the result written in the README file).
I believe there is a redundant check, although it is a minor point: in the possible_actions(state) function you already verify is_valid(state, item), so I think the check 'if not is_valid(state, a): continue' in the search(...) function is unnecessary.
It is interesting that the initial transformations of input_state remove duplicates and therefore speed up the search: for example, for N=5, going from 25 lists to 10 tuples.
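The deduplication step described above can be sketched like this (illustrative, not the reviewed code):

```python
def dedup(lists):
    """Normalise each inner list to a sorted tuple and keep one copy of
    each, shrinking the search space before the search even starts."""
    return list({tuple(sorted(inner)) for inner in lists})
```

For N=5 this kind of normalisation is exactly what produces the 25-lists-to-10-tuples reduction reported above.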

Peer review lab2

Hi,
Overall, I think your code is clear and your algorithm works quite well; I like the explanation of the functions in the README file.
I think that working with 0 and 1 (or True and False), where each value marks whether a tuple/list is included or not, is lighter both computationally and in memory than representing genomes as tuples of tuples.
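A minimal sketch of the bitmap representation described here (the candidate tuples and names are made up for illustration):

```python
import random

# candidate tuples of the problem instance (illustrative values)
PROBLEM = [(0, 1), (1, 2), (3,), (0, 3)]

def random_genome(p_on=0.5):
    """One bit per candidate tuple: 1 = included in the cover."""
    return [1 if random.random() < p_on else 0 for _ in PROBLEM]

def phenotype(genome):
    """Decode the bitmap back into the selected tuples."""
    return [t for t, bit in zip(PROBLEM, genome) if bit]
```

The genome is then a fixed-length bit list, so mutation and crossover only flip or exchange bits instead of rebuilding nested tuples.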
It is interesting that your mutation and crossover are not completely "random": you decided to keep the parents and discard the offspring when they are "worse" than the parents (instead of the classical operators, where you do not know whether the offspring are "better" or "worse" than the parents). I think this is a good idea, and it is confirmed by results that are competitive with those seen in class. To further improve the solutions, you could keep the logic of these operators and feed them an initial population whose genomes have the vast majority of lists/tuples excluded (set to 0 in the bitmap), because, as also discussed during lectures, in this problem the ratio between included and excluded tuples is fundamental (an aspect already partially addressed by your mutation and crossover).
I think your fitness is good for sorting the genomes that are solutions, because you prefer the shortest solutions, but the sorting of the genomes that are not solutions could be improved: you give the same weight to a genome that misses only one number and to a genome that misses many numbers, and I think ranking a genome that is close to being a solution the same as one that is very far from it is not optimal. You could sort the genomes by how many numbers they are missing and, for genomes missing the same count, by length.
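The two-level sorting suggested above could be sketched as follows (`PROBLEM` and all names are illustrative assumptions, not the reviewed code):

```python
# candidate tuples of the problem instance (illustrative values)
PROBLEM = [(0, 1), (1, 2), (3,), (0, 3)]

def fitness(genome, n):
    """Two-level sort key: first minimise how many of 0..n-1 are still
    missing, then minimise the total length of the selected tuples.
    Larger keys compare as better under max() / sort(reverse=True)."""
    covered, length = set(), 0
    for t, bit in zip(PROBLEM, genome):
        if bit:
            covered.update(t)
            length += len(t)
    missing = n - len(covered & set(range(n)))
    return (-missing, -length)
```

With this key, a genome one number short of a solution always outranks one that misses many numbers, regardless of their lengths.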
The choice of the parameters POPULATION and OFFSPRING can lead to very different results, and there is no optimal way to choose them except on the basis of the obtained results (trial and error); so I appreciate your choice of varying these parameters with N.

Peer Review

  1. In the result function you do not filter the elements as specified in the comment above.
  2. In the goal_test function I would suggest passing a key argument to sorted() to control how the list is ordered.
  3. Sorting the individual elements of the list of lists used as the initial state is not worth its computational cost, because I do not think this operation guarantees better performance.
  4. Using a given State as input_state, rather than the empty one, is likely to worsen the result, because you may prune the graph and remove a path better than the one you eventually find. This is what we observed in our own code when we tried the same idea.
  5. Instead of lists and sets, numpy.ndarrays are preferable for efficiency: Python lists are heterogeneous and non-contiguous in memory, whereas numpy.ndarrays are homogeneous and contiguous, so each access is cheaper. Moreover, the numpy package can vectorise computations. Finally, unpacking the different lists is better done with reshape, and you can use numpy.unique() to compute the unique values of an ndarray.
  6. In general, the breadth-first approach does not perform well once you reach N = 20, so I would suggest an A* approach, as mentioned in the README, with a priority function equal to the current state cost plus the number of elements still needed to reach the solution (state_cost[state] + missing elements), so that the most promising paths are expanded first.
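Point 6 could be sketched as follows; the set-covering encoding and every name here are my own assumptions, not the reviewed code:

```python
from heapq import heappush, heappop
from itertools import count

def astar_cover(n, subsets):
    """A* on set covering: g is the total weight (length) of the chosen
    subsets, h is the number of elements of 0..n-1 still missing.
    h is admissible because each missing element costs at least 1 to
    cover, so the first goal popped is an optimal cover."""
    tick = count()                       # tie-breaker: never compare sets
    start = frozenset()
    frontier = [(n, 0, next(tick), start, [])]
    best_g = {start: 0}
    while frontier:
        _, g, _, covered, chosen = heappop(frontier)
        if len(covered) == n:
            return chosen
        for s in subsets:
            new_cov = covered | frozenset(s)
            new_g = g + len(s)
            if new_g < best_g.get(new_cov, float("inf")):
                best_g[new_cov] = new_g
                h = n - len(new_cov)     # elements still missing
                heappush(frontier,
                         (new_g + h, new_g, next(tick), new_cov, chosen + [s]))
    return None                          # universe not coverable
```

The priority `new_g + h` is exactly "current state cost plus missing elements," so states closer to a complete cover are expanded first.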

Overall, as far as code style is concerned, the code is well written and human-readable, and the README is also well organised. I liked the "tweak" in the is_valid() function, which is a good idea: comparing against a set of the lists' unique values.

Peer Review Lab 3 s292634

General Overview

At first glance the repository is well ordered and easy to navigate through the Python files, and that's important!
I really appreciated the accuracy of the explanations in the README; it helped a lot in understanding the reasoning behind the code.

3.1 Fixed Rules

The strategy is okay: quite simple, but efficient, as the task asked.

3.2 Genetic Algorithm

Tuning the parameters in different phases sounds worthwhile, because the most important choices come in the last phases of the game, especially with large Nim values.
For the second strategy you proposed, the learning is likewise split into parts to tune the parameters better. Personally, looking at the results, I do not understand the point of this strategy given that the third one is also proposed, but that is just an opinion. The third one is really good, guys. I noticed you used the XOR operation to implement it; I am not sure that was permitted, because it helps the GA a lot to learn something close to the optimal nim-sum strategy... but seeing the results obtained in this case too, I can only be impressed!
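For reference, the nim-sum strategy alluded to here can be sketched in a few lines; this is standard Nim theory, not the reviewed implementation:

```python
from functools import reduce
from operator import xor

def nim_sum(rows):
    """XOR of all row sizes: non-zero means the position is winning
    for the player to move (normal-play Nim)."""
    return reduce(xor, rows, 0)

def optimal_move(rows):
    """Return (row, new_size) leaving a zero nim-sum, if one exists."""
    ns = nim_sum(rows)
    for i, r in enumerate(rows):
        target = r ^ ns
        if target < r:          # removing r - target objects is legal
            return i, target
    return None                  # nim-sum already zero: losing position
```

Because the GA can discover XOR-like features, it ends up approximating this exact rule, which is likely why the results look so strong.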

3.3 Min-Max

I found that leaving the old genetic algorithm commented out is always a good idea, as it makes the changes in the new proposal easier to follow. The solution obtained is good, so I can state that this is a well-working min-max. The pruning phase does not affect the result but speeds up the inference time.
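A minimal sketch of min-max with alpha-beta pruning for normal-play Nim (the player taking the last object wins); this illustrates the pruning idea, not the reviewed code, and the reviewed game may use different conventions:

```python
def minimax(rows, alpha=-1, beta=1, maximizing=True):
    """Return +1 if the player to move wins with perfect play, -1
    otherwise. Alpha-beta pruning cuts branches that cannot change
    the result, which speeds up inference without altering it."""
    if all(r == 0 for r in rows):
        # the previous player took the last object and won
        return -1 if maximizing else 1
    best = -1 if maximizing else 1
    for i, r in enumerate(rows):
        for take in range(1, r + 1):
            child = list(rows)
            child[i] -= take
            score = minimax(tuple(child), alpha, beta, not maximizing)
            if maximizing:
                best = max(best, score)
                alpha = max(alpha, best)
            else:
                best = min(best, score)
                beta = min(beta, best)
            if beta <= alpha:    # prune: remaining branches cannot matter
                return best
    return best
```

As Nim theory predicts, positions with zero nim-sum (e.g. rows 1, 2, 3) evaluate to a loss for the player to move.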

3.4 Reinforcement Learning

As you said in the README, RL is hard to evaluate because of the mutability of the values. I did not know anything about the bblais Game package, and implementing RL on top of the ideas of another repo is laudable. I would rather try more training phases of the RL agent against itself, and a lighter penalty when it loses... but it's just a consideration.
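The suggestion above (more self-play and a softer loss penalty) could be sketched with tabular Q-learning; every name and value here is illustrative and not taken from the reviewed repository:

```python
import random

def train_self_play(episodes=5000, rows=(1, 2, 3), alpha=0.3,
                    epsilon=0.2, loss_penalty=-0.5):
    """Self-play Q-learning sketch for normal-play Nim, with a loss
    penalty softer than -1 as suggested. Both sides share one Q-table,
    so every episode trains the agent against itself."""
    Q = {}  # (state, move) -> value

    def moves(state):
        return [(i, t) for i, r in enumerate(state) for t in range(1, r + 1)]

    def apply(state, move):
        i, t = move
        s = list(state)
        s[i] -= t
        return tuple(s)

    for _ in range(episodes):
        state, history = rows, []
        while any(state):
            ms = moves(state)
            if random.random() < epsilon:        # explore
                m = random.choice(ms)
            else:                                # exploit current Q-table
                m = max(ms, key=lambda m: Q.get((state, m), 0.0))
            history.append((state, m))
            state = apply(state, m)
        # the last mover took the final object and wins (normal play);
        # walking backwards, rewards alternate between the two players
        reward = 1.0
        for s, m in reversed(history):
            old = Q.get((s, m), 0.0)
            Q[(s, m)] = old + alpha * (reward - old)
            reward = loss_penalty if reward == 1.0 else 1.0
    return Q
```

A milder `loss_penalty` keeps losing moves from being crushed too quickly, which can help exploration early in training.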

Overall, again, congratulations for your work. Very very well done!
