pybamm-team / battbot

An automated Twitter Bot that Tweets random Battery Simulations and replies to requested Battery Simulations.

Home Page: https://twitter.com/battbot_
battbot's Introduction

BattBot


An automated Battery Bot that tweets random battery configuration plots in the form of a GIF with the help of PyBaMM. The bot focuses on comparing 2 or 3 different configurations and can also reply to a simulation request. All the randomly tweeted configurations are stored in data.txt, and the latest tweeted configuration is stored in config.txt, which can also be played with on Google Colab here. Some examples of simulation requests made through a tweet are available in REQUEST_EXAMPLES.md.

Deployment (The CI/CD pipeline)

One half of the bot is deployed on GitHub Actions and the other half is deployed on Heroku. The GitHub Actions deployment tweets random configurations at 7 am and 7 pm UTC whereas the Heroku deployment runs the script every minute to look out for tweet requests.

GitHub Actions is also responsible for running the tests, updating the stored random configurations, and keeping the run-simulations notebook in working condition (by updating the latest tweeted configuration in config.txt). The last_seen_id (ID of the tweet to which the bot replied last) is also synced on scheduled runs to keep the Heroku and GitHub Actions deployment in sync with each other.

Once everything in the Continuous Integration (GitHub Actions) part of the pipeline passes, the Continuous Deployment phase starts and the bot is deployed on Heroku, where it is built again with the updated last_seen_id (Heroku does not store the locally updated files permanently because of its ephemeral filesystem, another reason to update last_seen_id using GitHub Actions).

The files that keep Heroku deployment running -

The file that keeps GitHub Actions deployment running -

Random Tweets

  • The random configuration is generated in config_generator.py, hence, all new options added in the future should go in this file.
  • This configuration is passed down to random_plot_generator, which then calls different scripts based on whether degradation is present in the configuration.
  • Comparison plots ("model comparison" and "parameter comparison") with no degradation are generated in ComparisonGenerator.
  • The GIFs are created in create_gif.py and are resized using resize_gif.py to make them suitable for the Twitter API.
  • The GIFs are then finally tweeted out by the Tweet class (a minimal sketch of this flow follows the list).
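A minimal, self-contained sketch of this flow (the function names mirror the repository's file names, but the bodies are illustrative stand-ins, not the bot's actual code) -

import random


def generate_configuration():
    # config_generator.py: build a random configuration (illustrative fields only)
    return {
        "choice": random.choice(["model comparison", "parameter comparison"]),
        "chemistry": "Chen2020",
        "degradation": random.choice([True, False]),
    }


def random_plot_generator(config):
    # dispatch to different scripts based on whether degradation is present
    if config["degradation"]:
        return "degradation (summary variable) plots"
    return "ComparisonGenerator plots"


if __name__ == "__main__":
    config = generate_configuration()
    plots = random_plot_generator(config)
    # create_gif.py and resize_gif.py would turn the plots into a Twitter-sized
    # GIF, and the Tweet class would finally post it
    print(config, "->", plots)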

Requested Tweets

  • The replying functionality is a hybrid of tweepy and the Python libraries requests and requests_oauthlib.
  • The Reply class is responsible for reading the tweet requests and for creating a configuration from each tweet's text.
  • This configuration is then passed down to random_plot_generator, which simulates and solves it (just as it does with a random configuration); a sketch of the reply loop follows the list.
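A hedged sketch of the reply loop, using tweepy's mentions_timeline and the stored last_seen_id; parse_request is a hypothetical stand-in for the Reply class's parsing logic, and the real keys live in the deployment's secrets -

import tweepy

auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")  # placeholder credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)


def parse_request(text):
    # hypothetical stand-in for the Reply class's text -> configuration parsing
    return {"request_text": text}


def check_requests(last_seen_id):
    # read mentions newer than the stored last_seen_id, oldest first
    for tweet in reversed(api.mentions_timeline(since_id=last_seen_id, tweet_mode="extended")):
        config = parse_request(tweet.full_text)
        # config would go to random_plot_generator, and the resulting GIF would
        # be tweeted as a reply to tweet.id
        last_seen_id = tweet.id
    return last_seen_id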

Uploading and Replying (Twitter API)

  • A GIF is uploaded and tweeted using the Upload class. It uploads a media file in chunks (keeping in mind the Twitter API's timeout) to a Twitter API endpoint and then finally tweets it (see the chunked-upload sketch below).
  • The Tweet and Reply classes inherit from the Upload class to tweet random and user-requested GIFs.
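A hedged sketch of the chunked upload against Twitter's v1.1 media/upload.json endpoint, using requests and requests_oauthlib (the INIT/APPEND/FINALIZE flow); the bot's actual Upload class differs in details such as status polling and error handling -

import os

import requests
from requests_oauthlib import OAuth1

MEDIA_ENDPOINT = "https://upload.twitter.com/1.1/media/upload.json"
auth = OAuth1("API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")  # placeholders


def upload_gif(path, chunk_size=1024 * 1024):
    total_bytes = os.path.getsize(path)

    # INIT: declare the upload
    init = requests.post(MEDIA_ENDPOINT, auth=auth, data={
        "command": "INIT",
        "media_type": "image/gif",
        "media_category": "tweet_gif",
        "total_bytes": total_bytes,
    })
    media_id = init.json()["media_id_string"]

    # APPEND: send the file in chunks to respect the API's size/time limits
    with open(path, "rb") as f:
        segment = 0
        while chunk := f.read(chunk_size):
            requests.post(
                MEDIA_ENDPOINT,
                auth=auth,
                data={"command": "APPEND", "media_id": media_id, "segment_index": segment},
                files={"media": chunk},
            )
            segment += 1

    # FINALIZE: tell Twitter the upload is complete; the returned media_id can
    # then be attached to a tweet
    requests.post(MEDIA_ENDPOINT, auth=auth, data={"command": "FINALIZE", "media_id": media_id})
    return media_id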

Tests and coverage

The tests and the workflows are designed in a way that protects the Twitter API keys from being revealed. The tests that require Twitter API keys never run on a PR coming from a forked repository, as that would expose the keys: anyone adding a malicious script to such a PR would be able to print, store, or read them. More information on the structure of the test directory can be found here.

The bot makes a lot of multiprocessing calls (using a custom Process class), which makes measuring code coverage a bit unusual. The sitecustomize.py and .coveragerc files make sure that coverage also runs in the test subprocesses; a minimal sketch is given below. Complete documentation for this can be found here.
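A minimal sketch following coverage.py's documented subprocess-measurement approach (the repository's actual sitecustomize.py and .coveragerc may differ): sitecustomize.py calls coverage.process_startup(), which only has an effect when the COVERAGE_PROCESS_START environment variable points at a config that sets parallel = True and concurrency = multiprocessing under [run], so the per-process data files can be combined afterwards.

# sitecustomize.py (sketch): start coverage in every Python process that runs
# site initialisation, including the multiprocessing subprocesses used in tests
import coverage

coverage.process_startup()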

The coverage report on a PR coming from a fork will show that the coverage goes down (even if everything is covered) as some tests won't run on that PR as described above.

Citing PyBaMM

If you use PyBaMM in your work, please cite the paper

Sulzer, V., Marquis, S. G., Timms, R., Robinson, M., & Chapman, S. J. (2021). Python Battery Mathematical Modelling (PyBaMM). Journal of Open Research Software, 9(1).

You can use the bibtex

@article{Sulzer2021,
  title = {{Python Battery Mathematical Modelling (PyBaMM)}},
  author = {Sulzer, Valentin and Marquis, Scott G. and Timms, Robert and Robinson, Martin and Chapman, S. Jon},
  doi = {10.5334/jors.309},
  journal = {Journal of Open Research Software},
  publisher = {Software Sustainability Institute},
  volume = {9},
  number = {1},
  pages = {14},
  year = {2021}
}

To cite papers relevant to your code, you can add the following -

pybamm.print_citations()

to the end of your script. This will print bibtex information to the terminal; passing a filename to print_citations will print the bibtex information to the specified file instead. A list of all citations can also be found in the citations file.
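For example, to write the citations to a file instead of the terminal -

import pybamm

# ... run your simulation code ...
pybamm.print_citations("citations.txt")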

Contributing to BattBot

All contributions to this repository are welcome. You can go through our contribution guidelines to make the whole process smoother.

battbot's People

Contributors

brychelle, dependabot[bot], github-actions[bot], saransh-cpp, vaibhav-chopra-gt


battbot's Issues

Error in `run-simulations` notebook with `degradation comparisons`

I encountered an error with the degradation comparisons while running the notebook for the upcoming workshop/training's recording.

Steps to reproduce

  1. Run the latest simulation (or if there is a new simulation now, run this one - https://github.com/pybamm-team/BattBot/blob/main/bot/data.txt#L119)

Error

2021-09-19 13:59:26,842 - [INFO] utils._init_num_threads(157): NumExpr defaulting to 2 threads.
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/pybamm/util.py in __getitem__(self, key)
     89         try:
---> 90             return super().__getitem__(key)
     91         except KeyError:

KeyError: "Positive electrode Young's modulus [Pa]"

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
4 frames
KeyError: "'Positive electrode Young's modulus [Pa]' not found. Best matches are ['Positive electrode OCP [V]', 'Positive electrode thickness [m]', 'Positive electrode tortuosity']"

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/pybamm/parameters/parameter_values.py in update(self, values, check_conflict, check_already_exists, path)
    297                         + "have a default value. ({}). If you are ".format(err.args[0])
    298                         + "sure you want to update this parameter, use "
--> 299                         + "param.update({{name: value}}, check_already_exists=False)"
    300                     )
    301             # if no conflicts, update, loading functions and data if they are specified

KeyError: "Cannot update parameter 'Positive electrode Young's modulus [Pa]' as it does not have a default value. ('Positive electrode Young's modulus [Pa]' not found. Best matches are ['Positive electrode OCP [V]', 'Positive electrode thickness [m]', 'Positive electrode tortuosity']). If you are sure you want to update this parameter, use param.update({{name: value}}, check_already_exists=False)"

Possible cause

The bot's code currently updates the Mohtat2020 parameter set manually, which the notebook does not do.

Solution

Updating the parameter sets as shown below should work -

import pybamm  # needed for pybamm.Parameter below


def lico2_volume_change_Ai2020(sto):
    # volume change of the positive (LiCoO2) electrode as a function of stoichiometry
    omega = pybamm.Parameter("Positive electrode partial molar volume [m3.mol-1]")
    c_p_max = pybamm.Parameter("Maximum concentration in positive electrode [mol.m-3]")
    t_change = omega * c_p_max * sto
    return t_change


def graphite_volume_change_Ai2020(sto):
    # ninth-order polynomial fit for the negative (graphite) electrode volume change
    p1 = 145.907
    p2 = -681.229
    p3 = 1334.442
    p4 = -1415.710
    p5 = 873.906
    p6 = -312.528
    p7 = 60.641
    p8 = -5.706
    p9 = 0.386
    p10 = -4.966e-05
    t_change = (
        p1 * sto ** 9
        + p2 * sto ** 8
        + p3 * sto ** 7
        + p4 * sto ** 6
        + p5 * sto ** 5
        + p6 * sto ** 4
        + p7 * sto ** 3
        + p8 * sto ** 2
        + p9 * sto
        + p10
    )
    return t_change

params.update(
    {
        # mechanical properties
        "Positive electrode Poisson's ratio": 0.3,
        "Positive electrode Young's modulus [Pa]": 375e9,
        "Positive electrode reference concentration for free of deformation [mol.m-3]": 0,  # noqa
        "Positive electrode partial molar volume [m3.mol-1]": -7.28e-7,
        "Positive electrode volume change": lico2_volume_change_Ai2020,
        "Negative electrode volume change": graphite_volume_change_Ai2020,
        # Loss of active materials (LAM) model
        "Positive electrode LAM constant exponential term": 2,
        "Positive electrode critical stress [Pa]": 375e6,
        # mechanical properties
        "Negative electrode Poisson's ratio": 0.3,
        "Negative electrode Young's modulus [Pa]": 15e9,
        "Negative electrode reference concentration for free of deformation [mol.m-3]": 0,  # noqa
        "Negative electrode partial molar volume [m3.mol-1]": 3.1e-6,
        # Loss of active materials (LAM) model
        "Negative electrode LAM constant exponential term": 2,
        "Negative electrode critical stress [Pa]": 60e6,
        # Other
        "Cell thermal expansion coefficient [m.K-1]": 1.48e-6,
        "SEI kinetic rate constant [m.s-1]": 1e-15,
        "Positive electrode LAM constant propotional term": 1e-3,
        "Negative electrode LAM constant propotional term": 1e-3,
        "EC diffusivity [m2.s-1]": 2e-18,
    },
    check_already_exists=False,
)

Update citation in README

Now that the PyBaMM paper has been published, can you update the citation details in the README? (see PyBaMM's updated README).

The formatted citation should read

Sulzer, V., Marquis, S. G., Timms, R., Robinson, M., & Chapman, S. J. (2021). Python Battery Mathematical Modelling (PyBaMM). Journal of Open Research Software, 9(1).

The Bibtex should read

@article{Sulzer2021,
  title = {{Python Battery Mathematical Modelling (PyBaMM)}},
  author = {Sulzer, Valentin and Marquis, Scott G. and Timms, Robert and Robinson, Martin and Chapman, S. Jon},
  doi = {10.5334/jors.309},
  journal = {Journal of Open Research Software},
  publisher = {Software Sustainability Institute},
  volume = {9},
  number = {1},
  pages = {14},
  year = {2021}
}

Bot Ceased Tweeting Daily Battery Simulations

Issue

The Bot has ceased tweeting daily battery simulations for the past two weeks.

Description

It seems that the Bot stopped functioning correctly after I posted comments on one of its tweets. I initially thought my comments might have somehow interfered with the Bot's operation. To resolve this, I deleted my comments, but unfortunately, this did not resolve the issue as the Bot is still not tweeting daily battery simulations.

Expected behavior

The Bot should be tweeting daily battery simulations irrespective of user interactions on its tweets.

I hope this powerful Bot will be back soon.

Check parameter_value_generator for functions

At the moment parameter_value_generator just returns param_value, but if the parameter is a function I think you want it to return that function scaled by param_value, i.e. FunctionLike(params[parameter], param_value). It looks like for functions it just returns a random number between 0.5 and 2, but maybe I am wrong?

Errors in `CONTRIBUTING.md`

  1. In Local installation, the directory name ("PyBaMM-Twitter-Bot") is wrong. It should be "BattBot".
  2. Below that, in the sentence "This will install all the dependencies in your local system including the develop branch of PyBaMM on which this bot is based.", it should be "latest version of PyBaMM".
  3. In Pre-commit checks, the second point should have an "OR" statement for users who don't have a Twitter Developer Account. The commands should be -
coverage run --concurrency=multiprocessing -m unittest discover test/without_keys -v
python -m unittest discover -v test/without_keys
  4. Add -v in all the test commands.
  5. Also, it should be flake8 --max-line-length=89.

GIF size > 15728640 bytes in some cases

2021-08-12 06:54:04,277 - [INFO] upload.post_request(199): 400
2021-08-12 06:54:04,277 - [INFO] upload.post_request(200): {"request":"\/1.1\/media\/upload.json","error":"File size exceeds 15728640 bytes."}
2021-08-12 06:54:04,277 - [INFO] upload.post_request(201): Twitter API internal error. Trying again in 5 minutes

This is the first case where I have seen this error.

Create a notebook to run simulations

Create a colab notebook to run the tweeted configurations -

Options I have in mind -

  1. Directly take data from GitHub Actions and somehow plug it in the notebook.
  2. Write the data down in a text file and then read it using a script and plug it in the notebook.
  3. Let people enter the data manually from the tweet if options 1 and 2 are not possible.

Some resources -
https://medium.com/analytics-vidhya/a-quick-workflow-for-google-colab-github-and-jupyter-notebooks-on-mac-ff5d004e01f
https://towardsdatascience.com/google-drive-google-colab-github-dont-just-read-do-it-5554d5824228

Remove plots

  1. Remove plot choices 0 and 2; keep the summary variables one (comment it out) because that will be modified to tweet summary-variable comparisons.
  2. Change choice from a random int to a random string to make the code more readable.

Migrate away from `heroku`

Heroku is getting rid of their free tier. This is a placeholder issue for migrating the bot to another cloud platform.

Vary `"Current function [A]"` and `"Ambient temperature [K]"` in `"parameter comparison"`

You could also vary "Ambient temperature [K]" here, as well as the param_to_vary. You could also vary "Current function [A]" (keeping it the same for all comparisons). E.g. you recently tweeted "Doyle-Fuller-Newman model with Chen2020 parameters varying 'Positive electrode exchange-current density [A.m-2]' for a 1.0 C discharge at 25.0°C", but you could also do "Doyle-Fuller-Newman model with Chen2020 parameters varying 'Positive electrode exchange-current density [A.m-2]' for a 2.0 C discharge at 35.0°C".

Happy to leave that for a separate PR though.

Originally posted by @rtimms in #48 (comment)

As discussed above, it would be nice to vary these parameters in every comparison: vary both of them in constant-discharge "parameter comparisons", and just "Ambient temperature [K]" in "parameter comparisons" with experiments.

Add tweet status

Add a tweet status of the form -

Case 1: comparing models
One discharge:
Comparing {model1.name}, {model2.name}, and {model3.name} with {parameters} for a {x}C discharge at {y} degrees C
[Link to code]
Experiments:
Comparing {model1.name}, {model2.name}, and {model3.name} with {parameters} for the following experiment: [experiment]

Case 2: varying a parameter
{model.name} with {parameters.name} varying {parameter.name} for a {x}C discharge at {y} degrees C
[Link to code]
{model.name} with {parameters.name} varying {parameter.name} for the following experiment: [experiment]

The "Link to code" option will be added once #14 is closed.

Implement more comparisons

  1. No degradation, plot GIF (t=0 to t=end), <=3 cycles:
     • SPM vs SPMe vs DFN, same parameters
     • One model, change one parameter by ±10% like in the notebook I sent
  2. With degradation, plot summary variables:
     • One model, one degradation mode, change one degradation parameter by ±10%
     • One model, compare degradation modes, changing a single one at a time (e.g. with or without SEI porosity change, keeping the same particle mechanics, SEI model, and plating model)

Mentioned in #5

Add `termination="80% capacity"` to tweet text

Description

Right now all the experiments are stopped at 80% capacity, but this is not conveyed through the tweet text. Add it to the tweet text and make sure that the text does not exceed Twitter's specified character limit; an illustrative experiment is sketched below.
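For reference, a hedged example of this kind of experiment (the bot's actual cycling steps vary); the termination="80% capacity" argument is the condition that should be reflected in the tweet text -

import pybamm

# a degradation experiment that stops once the cell reaches 80% of its
# original capacity
experiment = pybamm.Experiment(
    [
        "Discharge at 1C until 3.3 V",
        "Charge at 0.3C until 4.0 V",
        "Hold at 4.0 V until C/100",
    ]
    * 10,
    termination="80% capacity",
)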

While testing, killing a simulation leaves behind the generated plots

Issue

Right now, while testing, the code waits only 10 minutes for a simulation to complete (while tweeting, this limit is 20 minutes) to make the tests quicker, but sometimes the simulation is cancelled midway and the leftover generated plots are never deleted.

Fix

As I don't want the tests to take even longer (they already run for about an hour right now), it would be better to programmatically delete these images rather than increasing the timeout; a sketch of such a cleanup step is below.
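A minimal sketch of such a cleanup step (the file patterns are assumptions, not the bot's actual output names) -

import glob
import os


def delete_leftover_plots(patterns=("*.png", "*.gif")):
    # remove plot/GIF files left behind by a simulation that was killed midway
    for pattern in patterns:
        for path in glob.glob(pattern):
            os.remove(path)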

Labels come out wrong while varying `Functional Parameters`

Reference tweet

Description

The label does not contain the value of the factor (by which the function is being scaled). The label values come from the default __str__ method of the FunctionLike class.

Possible fix

Adding a custom __str__ method that returns the factor (by which the function is being scaled) should work.
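A hedged sketch of what such a wrapper could look like (this FunctionLike is an illustrative stand-in, not the bot's actual class) -

class FunctionLike:
    # wraps a parameter function and scales its output by a constant factor
    def __init__(self, function, scale):
        self.function = function
        self.scale = scale

    def __call__(self, *args, **kwargs):
        return self.scale * self.function(*args, **kwargs)

    def __str__(self):
        # expose the scale factor so plot labels show how the function was varied
        name = getattr(self.function, "__name__", "function")
        return f"{self.scale} * {name}"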

Twitter API error "code": 130

Description

The Twitter API is sometimes "over capacity" (because of a lot of requests from developers around the world, or a burst of requests from the bot at once) and hence rejects the POST requests being sent to it. This causes the tests to fail and might even someday cause the Tweet functionality to fail with no issues/bugs in the code.

Error

503
{"errors":[{"message":"Over capacity","code":130}]}

The failed workflow - https://github.com/Saransh-cpp/PyBaMM-Twitter-Bot/pull/10/checks?check_run_id=2840127936

Possible fix

Use an if-else block inside an infinite loop that retries every 5 minutes (to make sure that we don't bombard the Twitter API with POST requests) and breaks only when the status code of the response is between 200 and 299.
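A minimal sketch of that retry loop, assuming requests (the bot's actual post_request method will differ) -

import time

import requests


def post_with_retry(url, auth=None, **kwargs):
    # retry every 5 minutes until the Twitter API returns a 2xx status code,
    # e.g. after a 503 {"errors":[{"message":"Over capacity","code":130}]}
    while True:
        response = requests.post(url, auth=auth, **kwargs)
        if 200 <= response.status_code < 300:
            return response
        time.sleep(5 * 60)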

Speed up the tests

Description

Right now the tests take more than an hour to run, because every GIF created during testing is built from 80 images. Once #83 is closed, the tests can be changed to generate only 3 images for a single GIF.

The bot replies every time CI passes

Reference tweet

I think that the bot is replying to tweets at every scheduled GitHub Actions run.

Suspected reason

  • The last_seen_id.txt on GitHub is not being updated with the actual last seen ID.
  • Whenever the CI passes on main (scheduled run), the code is automatically delivered to Heroku (with the old and static last_seen_id.txt) where it is deployed again.
  • After deployment, when the script reads last_seen_id.txt, it finds an old tweet ID (which will never change) and starts reading tweets from that particular tweet ID.

Possible solution

  1. Add a script to update the last_seen_id.txt, similar to the one which updates config.txt and data.txt.

This might create some problems at 7 and 19 UTC. Say the bot is about to reply to a tweet with ID = x but, at that instant, the Actions run and update everything (recording x as the last seen ID); the reply process will be cancelled and the bot will be built and deployed again with the new code on Heroku. When the reply script runs again on Heroku, the bot will start scanning tweets with ID > x, so the tweet with ID = x will never get a reply (rare, but definitely a possibility).

For this reason, maybe we should deploy the bot manually? Keep updating last_seen_id.txt with GH Actions, but deploy the bot to Heroku manually after a bug fix or a feature addition?

Running `pybamm.print_citations()` in the notebook gives an error


Full trace -

---------------------------------------------------------------------------
PluginNotFound                            Traceback (most recent call last)
<ipython-input-12-e8d18fc260ed> in <module>()
----> 1 pybamm.print_citations()

6 frames
/usr/local/lib/python3.7/dist-packages/pybamm/citations.py in print_citations(filename, output_format)
    112 def print_citations(filename=None, output_format="text"):
    113     """ See :meth:`Citations.print` """
--> 114     pybamm.citations.print(filename, output_format)
    115 
    116 

/usr/local/lib/python3.7/dist-packages/pybamm/citations.py in print(self, filename, output_format)
     92                 "plain",
     93                 citations=self._papers_to_cite,
---> 94                 output_backend="plaintext",
     95             )
     96         elif output_format == "bibtex":

/usr/local/lib/python3.7/dist-packages/pybtex/__init__.py in format_from_file(*args, **kwargs)
    180 def format_from_file(*args, **kwargs):
    181     """A convenience function that calls :py:meth:`.PybtexEngine.format_from_file`."""
--> 182     return PybtexEngine().format_from_file(*args, **kwargs)
    183 
    184 

/usr/local/lib/python3.7/dist-packages/pybtex/__init__.py in format_from_file(self, filename, *args, **kwargs)
     90         :py:meth:`~.Engine.format_from_files`.
     91         """
---> 92         return self.format_from_files([filename], *args, **kwargs)
     93 
     94     def format_from_files(*args, **kwargs):

/usr/local/lib/python3.7/dist-packages/pybtex/__init__.py in format_from_files(self, bib_files_or_filenames, style, citations, bib_format, bib_encoding, output_backend, output_encoding, min_crossrefs, output_filename, add_output_suffix, **kwargs)
    147         from pybtex.plugin import find_plugin
    148 
--> 149         bib_parser = find_plugin('pybtex.database.input', bib_format)
    150         bib_data = bib_parser(
    151             encoding=bib_encoding,

/usr/local/lib/python3.7/dist-packages/pybtex/plugin/__init__.py in find_plugin(plugin_group, name, filename)
    109         return _load_entry_point(plugin_group + '.suffixes', suffix)
    110     else:
--> 111         return _load_entry_point(plugin_group, _DEFAULT_PLUGINS[plugin_group])
    112 
    113 

/usr/local/lib/python3.7/dist-packages/pybtex/plugin/__init__.py in _load_entry_point(group, name, use_aliases)
     79         for entry_point in pkg_resources.iter_entry_points(search_group, name):
     80             return entry_point.load()
---> 81     raise PluginNotFound(group, name)
     82 
     83 

PluginNotFound: plugin pybtex.database.input.bibtex not found

Make the legend placement dynamic

Tweet

https://twitter.com/battbot_/status/1450010838903951361/photo/1

Issue

Right now the legend is placed at a fixed location, which sometimes makes the legend text spill outside the plot.

How it should be

The legend should either be placed dynamically (like it is done in PyBaMM) or be placed inside the frame (or the last subplot) itself (like in pybamm.plot_voltage_components and pybamm.plot_summary_variables) if for some reason the first option isn't possible; a small illustration is given below.
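A hedged illustration of the first option using matplotlib's loc="best", which lets matplotlib pick the least crowded spot inside the axes (the bot's multi-panel GIF frames may need something more involved) -

import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 3600, 100)
fig, ax = plt.subplots()
for i, label in enumerate(["SPM", "SPMe", "DFN"]):
    ax.plot(t, 4.0 - (0.2 + 0.05 * i) * t / 3600, label=label)
ax.legend(loc="best")  # placed dynamically instead of at a fixed location
fig.savefig("legend_demo.png")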


Fix the README logo

The logo suddenly broke up into pieces. I think it would be better if we combined the 2 images into a single logo.

