Comments (10)
I like the UX of Syncthing. On first start, you are asked for consent to collect usage data, with a preview of the data that would be sent.
There is also a public page with a great visualization of the data: https://data.syncthing.net/
But let's not make the mistakes Muse Group made with Audacity, like not asking for consent and using Google/Yandex for analytics:
audacity/audacity#835
audacity/audacity#889
I'm not sure if OpenTelemetry would be a usable tool for that.
from woodpecker.
Maybe we can also show a popup to an admin on first login, asking whether they want to enable usage tracking, and generate the tracking ID only after that.
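If the tracking ID is only generated after the admin consents, it can be a plain random token. A minimal sketch (the helper name `newTrackingID` is hypothetical, not Woodpecker's actual code):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newTrackingID returns a random 16-byte hex string to use as the
// instance's anonymous identifier. It carries no instance details,
// so it only links reports from the same server together.
func newTrackingID() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	id, err := newTrackingID()
	if err != nil {
		panic(err)
	}
	// 32 hex characters; would be stored in the server database
	// the first time the admin opts in.
	fmt.Println(id)
}
```

The ID would be persisted once and reused for every report, so the collector can deduplicate without learning anything about the instance.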
Thanks for volunteering!
The full set of data that we are going to transfer is:
- one part vanity metrics, to have things to celebrate: how many people run Woodpecker, with what kind of build volume, etc.
- and one part data to support product decisions: which version control systems are used, whether a certain feature is used, etc. Woodpecker needs to form an identity over time, and we need to focus efforts on things that are used or things that are strategic. And we need data to evaluate those strategic decisions.
This issue will be updated with the initial set of metrics, and I pledge to keep the list transparent at all times.
We need some server that collects it and verifies it's a legitimate request ... e.g. by doing a callback to see whether the site exists?
That would require us to "collect" the public address of the instance, but we don't want that, as reporting should be anonymous, I guess, and it would require instances to be publicly reachable.
I recently had a look at Grafana and InfluxDB for this.
I think we should simply create a small Go server that takes HTTP requests and inserts the data into a connected database like InfluxDB. I would create an anonymous ID at the first start of a server and save it to the server database. This ID (do we even need one? 🤔) would be used to send a request every x hours to our server, which simply adds an entry to the database (maybe directly aggregating it in the long term).
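A minimal sketch of such a collector, assuming a hypothetical `/report` route and illustrative field names; the actual schema and the InfluxDB insertion are left out:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// report is the payload an instance would send. Field names are
// illustrative, not a finalized schema.
type report struct {
	ID      string `json:"id"`
	Version string `json:"version"`
	Users   int    `json:"users"`
	Repos   int    `json:"repos"`
}

// decodeReport parses one submitted payload.
func decodeReport(body []byte) (report, error) {
	var rep report
	err := json.Unmarshal(body, &rep)
	return rep, err
}

// handleReport accepts a POST with a JSON report; a real server would
// hand the decoded report to the metrics database (omitted here).
func handleReport(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	var rep report
	if err := json.NewDecoder(r.Body).Decode(&rep); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	_ = rep // insert into InfluxDB here
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	rep, err := decodeReport([]byte(`{"id":"abc","version":"2.0.0","users":3,"repos":5}`))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", rep)
	// To actually serve:
	//   http.HandleFunc("/report", handleReport)
	//   log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Keeping the endpoint this dumb (accept JSON, write one row) is what makes the "small server" idea attractive; aggregation can happen later in the database.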
If we want to "protect" against abuse, we could add an IP-based limit: you can only create a tracking ID 10 times a day, and each tracking ID is only allowed to report data every x hours. Similar to how Let's Encrypt does it.
I think it would still be pretty helpful to get more insight into our users. For example, we always have to consider whether an option belongs on the repo level or the instance level. A user with an instance running on a Pi is totally fine updating pipeline configs or env vars; for larger companies, or communities like Codeberg, it's often not possible to force all users into one specific way of working, and a completely different update approach is needed, where features like config versions could help. Or things like quotas / user-provided agents would be really helpful for large instances, but probably no Pi user cares about limits. We for sure have to provide both, but insights could help us focus here. We could even do specific things like counting the number of repos that have pipeline option x set, allowing us to decide on actual data whether we drop that option or have to keep it / provide an alternative.
I do, however, expect that quite a lot of users in the community are against any kind of tracking, whether it is totally public or not, which could make the data pretty unreliable for us.
Using surveys isn't an option for me. Just ask yourself whether you would fill one out, or take the "who is using Woodpecker" discussion as a reference.
Which data should be transmitted?
If you need a tester, then I am happy to help.
Home Assistant is doing a similar thing. This could be a good starting point:
https://www.home-assistant.io/integrations/analytics
https://github.com/home-assistant/analytics.home-assistant.io
or OctoPrint:
https://github.com/OctoPrint/OctoPrint/blob/027d8f8069b86a7f5e8c185a2d8f294b631c2f08/src/octoprint/plugins/tracking/__init__.py
https://tracking.octoprint.org/
https://data.octoprint.org/
Data that would be interesting to collect every 24h (first on startup):
- version
- users counter
- active repo counter
- used forge
- activated features?
- executed pipelines counter
- total pipeline execution time
- connected agents counter
- used agent backends
- server and agent OSes
I'd like to ask some things about this again:
There's https://github.com/woodpecker-ci/analytics without real activity - do we still want to add usage tracking?
To be honest, I don't really see a value in it.
From @anbraten's comment about the data that should be sent:
- version
- users counter
- active repo counter
- executed pipelines counter
- total pipeline execution time
- connected agents counter

I don't really see how these can be used to improve development.

- used forge
- activated features?
- used agent backends
- server and agent OSes

For these I can see value, but this is data that would only need to be sent once. We could easily run a poll to find out how many users use which backend, which OS, etc.