Before talking about the model requirements, let me first describe the different ways we can train the system.
- Custom project training: Train on individual projects, and apply on the same project. (one to one)
- General project training: Train on a set of one or more projects, apply on a different set of one or more projects.
Custom Project Training:
- Gives the best result.
- Captures the idiosyncrasies of individual projects.
- Requires a lot of training data for each individual project.
- Does not generalize well to other projects.
- Requires more time to implement on a new project.
General Project Training:
- Results are not as good as with custom training.
- Captures general attributes of each project, which may be common with other projects.
- Training data from multiple projects are combined.
- Generalizes well to other projects.
- Requires less time to implement on a new project.
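The two training modes above differ only in how the training and apply sets are assembled. A minimal sketch, assuming each project's labeled samples are already available (the project names and the `samples` dict are purely illustrative):

```python
# Illustrative per-project labeled data: (features, label) pairs.
samples = {
    "libtiff": [("feat_a", 1), ("feat_b", 0), ("feat_c", 0)],
    "openssl": [("feat_d", 0), ("feat_e", 1)],
    "httpd":   [("feat_f", 0), ("feat_g", 0)],
}

def custom_training_split(project):
    """One-to-one: train and apply on the same project."""
    return samples[project], samples[project]

def general_training_split(train_projects, apply_projects):
    """Train on one set of projects, apply on a disjoint set."""
    assert not set(train_projects) & set(apply_projects)
    train = [s for p in train_projects for s in samples[p]]
    apply_set = [s for p in apply_projects for s in samples[p]]
    return train, apply_set

train, apply_set = general_training_split(["libtiff", "openssl"], ["httpd"])
```

The disjointness assertion captures the key property of general project training: the apply projects contribute no training data, which is what makes this mode usable on projects with too few issues to train on.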
Model Requirements
The performance of the ML/DL models depends on two preconditions:
- Issue (Bug) Count: the number of samples available for training.
- Negative/Positive Ratio: the ratio of negative samples (0 labels) to positive samples (1 labels). We also call it the False Positive/True Positive ratio.
Issue Count:
- Generally, the more samples available the better.
- We have tried our models on projects with approximately 10k samples and they have given good results.
- The best results were for Libtiff which had 12,500 samples.
Negative/Positive Ratio:
- The closer the ratio is to 1/1, the better. Such a dataset is called balanced.
- For the bug/vulnerability detection problem there are far more examples of non-buggy code than buggy code (fortunately), which is why the dataset is almost always heavily unbalanced.
- The more unbalanced the data, the worse the results.
- Our best results are for Libtiff which has a ratio of 20/1.
- We have obtained good results for datasets with ratio of up to 54/1.
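A sketch of how the N/P ratio can be computed from the labels, together with one common way to counter the imbalance during training (inverse-frequency class weights; the weighting scheme is an assumption, not necessarily what our pipeline uses):

```python
def np_ratio(labels):
    """Negative/Positive ratio: 0 = non-buggy, 1 = buggy."""
    pos = sum(1 for y in labels if y == 1)
    neg = sum(1 for y in labels if y == 0)
    return neg / pos  # e.g. 20.0 corresponds to a 20/1 dataset

def class_weights(labels):
    """Weight each class inversely to its frequency, so the rare
    positive class contributes as much to the loss as the negatives."""
    n = len(labels)
    pos = sum(labels)
    neg = n - pos
    return {0: n / (2 * neg), 1: n / (2 * pos)}

labels = [0] * 20 + [1]  # a 20/1 dataset, like Libtiff
ratio = np_ratio(labels)
weights = class_weights(labels)
```

With a 20/1 dataset the positive class ends up weighted 20x heavier than the negative class, which compensates for its rarity in the loss.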
Comments:
- If the issue count is low, the custom project training option is unavailable to us and we have to rely on general project training. This is the case with grep (2,441 issues), crun (3,513 issues) and fuse-overlayfs (727 issues).
- We get poor results when the issue count is very high and the negative/positive ratio is also very high. The worst results are for FFmpeg, which has about 500,000 (check number) examples and an N/P ratio of 120/1. In such cases we can restrict the bug types under consideration to improve the results.
- Another thing to note is that we can know both these numbers only after analyzing a project with a static analyzer and the D2A auto-labeler.
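Restricting the bug types under consideration, as suggested above for heavily unbalanced projects, amounts to filtering the auto-labeled issues before training. A minimal sketch; the bug-type names and the `issues` list are illustrative, not taken from the actual D2A output:

```python
# Illustrative auto-labeled issues from a static analyzer run.
issues = [
    {"bug_type": "NULL_DEREFERENCE", "label": 1},
    {"bug_type": "NULL_DEREFERENCE", "label": 0},
    {"bug_type": "BUFFER_OVERRUN",   "label": 0},
    {"bug_type": "DEAD_STORE",       "label": 0},
    {"bug_type": "DEAD_STORE",       "label": 0},
]

KEPT_TYPES = {"NULL_DEREFERENCE", "BUFFER_OVERRUN"}

def restrict(issues, kept_types):
    """Keep only issues of the selected bug types. Dropping types
    that are almost entirely negatives improves the N/P ratio."""
    return [i for i in issues if i["bug_type"] in kept_types]

subset = restrict(issues, KEPT_TYPES)
```

Here the toy dataset goes from a 4/1 to a 2/1 ratio after dropping the all-negative DEAD_STORE type, which is the effect we rely on for projects like FFmpeg.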