
Reinforcement Learning · convnetsharp · 49 comments · open

cbovar commented on August 21, 2024

Reinforcement Learning

Comments (49)

MarcoMeter commented on August 21, 2024

I'm applying DQN to my game BRO (https://www.youtube.com/watch?v=_mZaGTGn96Y) right now.
Within the next few months, I'll release BRO as open source on GitHub. BRO features an AI framework and a match sequence editor for match automation. The game is made with Unity.

Right now I need a much faster DQN implementation. The DQN demo mentioned above is lacking in that regard; training takes 30 minutes. That's why I'm considering contributing DQN to this repo.

And here is a video about the AI framework and the match sequence editor: https://www.youtube.com/watch?v=EE7EqoaOL34

MarcoMeter commented on August 21, 2024

+1

Shouldn't be that tough to implement DQN. Maybe I can contribute that in about 8 weeks. Though I haven't checked yet whether ConvNetSharp is suitable, performance-wise, for my implementation in Unity.

cbovar commented on August 21, 2024

For DQN you can check out this repo. It should be easy to adapt it to the newer version of ConvNetSharp.

I have worked on LSTM and will eventually release a 'shakespeare' demo. I have only worked on the GPU versions.

cbovar commented on August 21, 2024

I also see a DQN implementation that uses WPF for display in this fork.

MarcoMeter commented on August 21, 2024

I have worked with Deep-QLearning-Demo over the past weeks. But it lacks performance (it is single-threaded) and the code is hard to read and maintain. Then again, it was almost completely adapted from the ConvNetJS version, which uses that strange coding convention.

MarcoMeter commented on August 21, 2024

Hey,

if anybody has ideas for testing the DQN algorithm that is to be implemented, please let me know.

So far I've got these ideas for integration testing:

  • ConvNetJS's apples and poison example (Windows Forms), just like the already mentioned C# port (Deep-QLearning-Demo-csharp)
  • a slot machine (just a console application)
  • a moving target, which has to be shot by the agent (maybe Unity game engine)
  • the agent has to move a basket to catch fruit and avoid items like poison (maybe Unity game engine)

I'll probably find more through research.

After that, I'll start with the DQN implementation, probably based on the implementation of "Deep-QLearning-Demo-csharp". Then I'll compare it to the Python implementation done by DeepMind for the Atari games.

cbovar commented on August 21, 2024

Maybe you could also try it on a very simple task: reproducing the input:

  • 0 -> 0
  • 1 -> 1

It may fit in a unit test.
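A rough sketch of what such a test could look like, assuming the ConvNetSharp.Core double API (Net, FullyConnLayer, RegressionLayer, SgdTrainer, BuilderInstance) as used in the repo's demos; exact namespaces and signatures may need adjusting:

using ConvNetSharp.Core;
using ConvNetSharp.Core.Layers.Double;
using ConvNetSharp.Core.Training;
using ConvNetSharp.Volume;
using ConvNetSharp.Volume.Double;

// Tiny net: 1 input -> small hidden layer -> 1 regression output
var net = new Net<double>();
net.AddLayer(new InputLayer(1, 1, 1));
net.AddLayer(new FullyConnLayer(4));
net.AddLayer(new ReluLayer());
net.AddLayer(new FullyConnLayer(1));
net.AddLayer(new RegressionLayer());

var trainer = new SgdTrainer<double>(net) { LearningRate = 0.01 };

// Train on the identity mapping 0 -> 0 and 1 -> 1
for (var epoch = 0; epoch < 1000; epoch++)
{
    foreach (var v in new[] { 0.0, 1.0 })
    {
        var x = BuilderInstance.Volume.From(new[] { v }, new Shape(1, 1, 1, 1));
        var y = BuilderInstance.Volume.From(new[] { v }, new Shape(1, 1, 1, 1));
        trainer.Train(x, y);
    }
}

// Assert that net.Forward(...) now returns roughly 0 for input 0 and roughly 1 for input 1.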

MarcoMeter commented on August 21, 2024

(screenshot: slot machine console app)
I wrote a simple slot machine (console app) using 3 reels. Just hold down space to start the slot machine and to stop each reel one by one.

New items for the reels' slots are sampled from a specific probability distribution.

In the end, the agent has to decide when to stop the first reel, the second reel and finally the third one.
(I should consider letting the AI decide which reel to stop first, to add a few more output dimensions.)

SlotMachine.zip

Given this slot machine example, I'm going to approach the DQN implementation now.

MarcoMeter commented on August 21, 2024

Creating an interface between Python and C# might end up consuming too much time. I know there is the so-called IronPython library (http://ironpython.net/), which allows using Python from C#, but I haven't really looked into it.

MarcoMeter commented on August 21, 2024

Here is an update on the progress referring to a commit on the DQN branch of my fork:

Added major chunks of the DeepQLearner.cs [WIP]
A few TODOs left before testing and verification:

  • TODO: Overload or modify RetrievePolicy() to make use of Volumes, return output Volume from the net as well
  • TODO: Overload or modify GetNetInput() to make use of Volumes
  • TODO: Compute loss
  • TODO: Verify the consistency of the composed neural net upon initializing the DeepQLearner

https://github.com/MarcoMeter/ConvNetSharp/commit/5711468362d6f3551f82bad1e24d784e31f59a4b

MarcoMeter commented on August 21, 2024

And there is one more major thing on the list:

Adding a regression layer. I guess there is no regression layer implemented yet, right?

cbovar commented on August 21, 2024

It seems that RegressionLayer disappeared at some point (from tag 0.3.2). I will try to reintroduce it this weekend.

MarcoMeter commented on August 21, 2024

Maybe this is related to this commit, because the file 'F:\Repositories\ConvNetSharp\src\ConvNetSharp\Layers\RegressionLayer.cs' was removed in it:

Commit: 56fec45
Parents: 5a47e2e, 37cdfbf
Author: Augustin Juricic [email protected]
Date: Tuesday, 28 March 2017 11:18:18
Committer: Augustin Juricic
Merge remote-tracking branch 'github/master' into develop

cbovar commented on August 21, 2024

I think I never implemented RegressionLayer after ConvNetSharp started handling batches.

cbovar commented on August 21, 2024

RegressionLayer committed

MarcoMeter commented on August 21, 2024

Great, thanks. I'll move on soon.

MarcoMeter commented on August 21, 2024

As of now, I'm struggling with the issue that the computed action values grow exponentially towards positive or negative infinity.

cbovar commented on August 21, 2024

Have you tried a lower learning rate, e.g. 0.001?

MarcoMeter commented on August 21, 2024

A lower learning rate only slightly delays this outcome.

Nevertheless, I expect the output values to be less than 2, simply because the maximum reward for the slot machine example is 1, and it is probably handed out only after making at least 3 decisions.

MarcoMeter commented on August 21, 2024

I'm still trying to figure out the issue. Maybe I'm misusing the Volume class, or I might not have enough experience with the actual implementation of neural nets (like understanding every single detail of the regression layer implementation). So here is some more information.

Here is some pseudo code (Matiisen, 2015) featuring the core pieces of the algorithm:

initialize replay memory D
initialize action-value function Q with random weights
observe initial state s
repeat
    select an action a
        with probability ε select a random action
        otherwise select a = argmax_a' Q(s, a')
    carry out action a
    observe reward r and new state s'
    store experience <s, a, r, s'> in replay memory D

    sample random transitions <ss, aa, rr, ss'> from replay memory D
    calculate target for each minibatch transition
        if ss' is terminal state then tt = rr
        otherwise tt = rr + γ max_a' Q(ss', aa')
    train the Q network using (tt - Q(ss, aa))^2 as loss

    s = s'
until terminated

And this is the stated loss function for training:

L = 1/2 [ r + γ max_a' Q(s', a') - Q(s, a) ]^2

In Karpathy's DQN implementation, this loss function does not seem to be present. The regression layer implementations look similar (comparing Karpathy's and this repo's). Everything else is implemented accordingly (i.e. sampling experiences for computing new Q-values).
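For reference, here is a minimal, self-contained sketch of the target and loss from the pseudo code above (plain C#, no ConvNetSharp types; the Experience record and all names are illustrative, not from the actual branch):

using System;
using System.Linq;

public record Experience(double[] State, int Action, double Reward, double[] NextState, bool Terminal);

public static class QTarget
{
    // tt = rr for terminal transitions, otherwise tt = rr + gamma * max_a' Q(ss', a')
    public static double Target(Experience e, double gamma, Func<double[], double[]> q)
        => e.Terminal ? e.Reward : e.Reward + gamma * q(e.NextState).Max();

    // Squared error (tt - Q(ss, aa))^2; only the taken action's output enters the loss,
    // which is why the other output dimensions keep the net's own predictions during regression.
    public static double Loss(Experience e, double gamma, Func<double[], double[]> q)
    {
        var diff = Target(e, gamma, q) - q(e.State)[e.Action];
        return diff * diff;
    }
}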

Using the Deep Q Learning Demo CSharp, the output values for the slot machine stay below 0.02.
SlotMachine.zip

MarcoMeter commented on August 21, 2024

And this is a flow chart of the implementation of the Q Learning part
(flow chart: brain forward/backward pass)

cbovar commented on August 21, 2024

I haven't had time to look at the code yet. But you could maybe make the problem even simpler (like this) to make it easier to debug.

MarcoMeter commented on August 21, 2024

I could implement an example for contextual bandits along the lines of the Bandit Dungeon Demo (an example from the same author as the link you provided).

I just fear that the bandit examples are not complex enough to warrant a policy network. At least it would let me observe whether the Q-values grow to infinity or not.
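As a very small sanity check along those lines (purely illustrative, not from the thread), a tabular epsilon-greedy bandit without any network would show whether the value update itself is stable:

using System;
using System.Linq;

var rng = new Random(0);
double[] payout = { 0.2, 0.5, 0.8 };   // true win probabilities of three arms
var q = new double[3];                 // estimated action values
var pulls = new int[3];
const double epsilon = 0.1;

for (var step = 0; step < 10000; step++)
{
    // epsilon-greedy: random arm with probability epsilon, otherwise the greedy arm
    var a = rng.NextDouble() < epsilon ? rng.Next(3) : Array.IndexOf(q, q.Max());

    var reward = rng.NextDouble() < payout[a] ? 1.0 : 0.0;

    // incremental mean update: Q(a) += (r - Q(a)) / n, which cannot diverge
    pulls[a]++;
    q[a] += (reward - q[a]) / pulls[a];
}

Console.WriteLine(string.Join(", ", q.Select(v => v.ToString("0.00"))));  // should approach 0.2, 0.5, 0.8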

MarcoMeter commented on August 21, 2024

The only news I have is that I'm working on a different example (made with Unity). This example is about controlling a basket to catch rewarding items and to avoid punishing ones.

(screenshot: basket-catching example)

Concerning the DQN implementation, I'm still stuck. I hope Cedric can find some time to check my usage of Volumes.

cbovar commented on August 21, 2024

Sorry guys. I have been very busy with my new job. I'll try to look at this soon.

MarcoMeter commented on August 21, 2024

I just tested the implementation on the apples & poison example. The issue of exploding output values shows up there as well.

I didn't add the example to version control, since the code is functional but not well written (I took the known implementation and just substituted the DQN parts).

ApplesPoisonDQNDemo.zip

MarcoMeter commented on August 21, 2024

Just some update:

I created a UI for displaying the training progress. The red graph plots the average reward and the blue one the average loss. I resolved a bug concerning the epsilon exploration strategy (epsilon was always equal to 1 due to an integer division).

(screenshot: training progress UI)
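For illustration, the integer-division pitfall mentioned above (hypothetical numbers, not the actual branch code):

int step = 500, totalSteps = 10000;

// Both operands are ints, so the quotient truncates to 0 and epsilon stays at 1 forever.
double epsilonBuggy = 1.0 - step / totalSteps;          // always 1.0

// Casting one operand fixes the annealing.
double epsilonFixed = 1.0 - (double)step / totalSteps;  // 0.95 here, decreasing over time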

Since Cedric fixed a bug in the regression layer, the outputs do not explode anymore. Still, I have not achieved good behavior for the slot machine yet, though I came up with a new reward function which signals rewards based on the result of each stopped reel. The first stop rewards the agent with the value of the slot's item (e.g. 0.5 for a cherry or 1 for a 7). Stopping the second or third reel rewards the agent with 1 for a matching item; for a failure, the agent is punished with -0.5. Waiting neither punishes nor rewards the agent. Most of the time the agent learns to wait; it seems that this way any punishments are avoided.

I'll probably focus on the Apples and Poison demo now, because suitable hyperparameters are already known. One drawback is the performance: the referenced demo performs much better, so I'll have to find the bottleneck.

cbovar commented on August 21, 2024

I think you should focus on getting correct results first. As for performance, we can look at it later (using a batch size > 1 and the GPU will help).

MarcoMeter commented on August 21, 2024

Still, it surprises me that the Apples and Poison demo is much, much slower compared to Deep-QLearning-Demo-csharp.

(screenshot: performance profile)

Edit 1: If I enable GPU support by changing the namespaces, I get a BadImageFormatException because ConvNetSharp.Volume.GPU cannot be loaded, even though it is added as a reference to all project dependencies.

Edit 2: The Apples and Poison demo will probably take a whole day for training. It progresses at about 4 fps.

Edit 3: 240,000 learning steps (DeepQLearner.Backward) take 27 h. In comparison, 50,000 learning steps in Deep-QLearning-Demo-csharp take less than 9 minutes.

cbovar commented on August 21, 2024

You probably get the BadImageFormatException because you are running as a 32-bit process. The GPU code only works in 64-bit.
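A quick way to confirm the bitness at runtime (just a generic .NET check, nothing ConvNetSharp-specific):

using System;

// ConvNetSharp.Volume.GPU (CUDA) can only be loaded from a 64-bit process,
// so the project's platform target must be x64 (or "Prefer 32-bit" unchecked).
Console.WriteLine(Environment.Is64BitProcess
    ? "Running as 64-bit - GPU volumes can load."
    : "Running as 32-bit - switch the platform target to x64.");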

MarcoMeter commented on August 21, 2024

Thanks, this solved the BadImageFormatException.

And now it's a CudaException thrown at CudaHostMemoryRegion.cs:25, triggered by:
var chosenAction = _brain.Forward(new ConvNetSharp.Volume.GPU.Double.Volume(GatherInput(), new Shape(GatherInput().Length)));

One question:
Is there any way to avoid specifying the full path to the Volume class, as seen above? VS complains that Volume is a namespace even though the namespace is imported. The ConvNetSharp.Volume namespace is required for the Shape class, so I guess that's the conflict.
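A using alias would be one way around the clash (illustrative sketch; GatherInput and _brain are from the snippet above, and the Volume constructor is the one already used there):

using ConvNetSharp.Volume;                                 // needed for Shape
using GpuVolume = ConvNetSharp.Volume.GPU.Double.Volume;   // alias avoids the namespace/class name clash

// ...

var input = GatherInput();
var chosenAction = _brain.Forward(new GpuVolume(input, new Shape(input.Length)));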

cbovar commented on August 21, 2024

I have fixed the loss computation in the regression layer.

I think there is an issue here: you take the output of the FinalState and update the value related to the current Action. However, you should take the output related to the InitialState.

In ConvNetJS, it only regresses on the current Action dimension here.

You could do something like this:

// Create desired output volume
var desiredOutputVolume = _trainingOptions.Net.Forward(experience.InitialState).Clone();
desiredOutputVolume.Set(actionPolicy.Action, newActionValue);

I applied this modification on this branch: https://github.com/cbovar/ConvNetSharp/tree/DQN
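Put together, the corrected training step could look roughly like this (hedged sketch; the names around experience, actionPolicy and the trainer are assumptions based on the snippets in this thread, not the actual branch code):

// Desired output starts from the net's own prediction for the *initial* state...
var desiredOutputVolume = _trainingOptions.Net.Forward(experience.InitialState).Clone();

// ...then only the taken action's dimension is replaced by the new target value,
// so the regression loss is effectively (target - Q(s, a))^2 for that action alone.
desiredOutputVolume.Set(actionPolicy.Action, newActionValue);

// Regress the network towards the desired output for the initial state.
_trainer.Train(experience.InitialState, desiredOutputVolume);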

MarcoMeter commented on August 21, 2024

It looks like you are right about that. I missed that detail inside the train function.

cbovar commented on August 21, 2024

As for the GPU exception (CudaException thrown at CudaHostMemoryRegion.cs:25), it turns out it's a multi-threading issue: some volume allocation is done on the worker thread, whereas the GPU context was acquired on the main thread.

masatoyamada1973 commented on August 21, 2024

desiredOutputVolume.Set(actionPolicy.Action, newActionValue)

should probably be:

desiredOutputVolume.Set(experience.Action, newActionValue)

MarcoMeter commented on August 21, 2024

Hey,
I wanted to let you guys know that I stopped working on this.
I have switched to working with Python and the just released ML-Agents for Unity.

GospodinNoob commented on August 21, 2024

@MarcoMeter Hello. The link (https://github.com/MarcoMeter/Basket-Catch-Deep-Reinforcement-Learning) is broken. Is there a way to download the source code of this Unity implementation (the Unity project)? Thanks.

MarcoMeter commented on August 21, 2024

@GospodinNoob
https://github.com/MarcoMeter/Unity-ML-Environments

GospodinNoob commented on August 21, 2024

@MarcoMeter Thanks

GospodinNoob commented on August 21, 2024

@MarcoMeter Maybe you have a repo with Unity and your DQN? I am trying to add it, but I still have some misunderstanding of this system. Of course, only if it's not too much trouble for you :)
