googlecloudplatform / appengine-sidecars-docker

A set of services that run alongside your Google App Engine flexible environment application containers. Each service runs inside its own Docker container, alongside your application's source code.

Home Page: https://cloud.google.com/appengine/docs/flexible/

License: Apache License 2.0

Shell 10.51% Go 81.19% Dockerfile 3.35% Starlark 4.45% Pawn 0.50%


appengine-sidecars-docker's Issues

Switch from glog to zap.Logger in Opentelemetry Collector

This issue specifically aims at dockerstats receiver.

The zap logger is already linked in and passed to the receiver factory. We can plumb it all the way through to the scraper used by the receiver and use it instead of linking glog as an additional dependency.

OpenTelemetry Collector is disabled

In my app engine logs, I am consistently seeing:

{ "_BOOT_ID": "10093827d09e4a8aa3348732145fb1b7", "MESSAGE": "Oct 30 02:32:50 OpenTelemetry Collector is disabled", "_COMM": "bash", "_SYSTEMD_CGROUP": "/system.slice/flex-opentelemetry-collector.service", "_SYSTEMD_UNIT": "flex-opentelemetry-collector.service", "_UID": "0", "_STREAM_ID": "4d64bd04e6024d029c227bb0ab50201e", "_HOSTNAME": "aef-default-20201023t040523-szr2", "_SYSTEMD_INVOCATION_ID": "54b69d0fc5fb4945a9025b45ba344c05", "_SYSTEMD_SLICE": "system.slice", "SYSLOG_FACILITY": "3", "_GID": "0", "SYSLOG_IDENTIFIER": "bash", "PRIORITY": "6", "_TRANSPORT": "stdout", "_CAP_EFFECTIVE": "3fffffffff", "_PID": "1405225", "_MACHINE_ID": "5b2d6759ad59fef83741f10909893c3e" }
{
insertId: "ojqbqx7a5ppwktn2n"
jsonPayload: {18}
resource: {2}
timestamp: "2020-10-30T02:32:50.416989Z"
labels: {4}
logName: "projects/[REDACTED]/logs/appengine.googleapis.com%2Fopentelemetry_collector"
receiveTimestamp: "2020-10-30T02:32:51.956998535Z"
}
Default
2020-10-30 12:32:50.418 DDUT
{ "_HOSTNAME": "aef-default-20201023t040523-szr2", "_SYSTEMD_UNIT": "flex-opentelemetry-collector.service", "PRIORITY": "5", "_SYSTEMD_CGROUP": "/system.slice/flex-opentelemetry-collector.service", "SYSLOG_IDENTIFIER": "vm_runtime_init", "_GID": "0", "_TRANSPORT": "syslog", "MESSAGE": "Oct 30 02:32:50 OpenTelemetry Collector is disabled", "SYSLOG_FACILITY": "1", "_SYSTEMD_SLICE": "system.slice", "_BOOT_ID": "10093827d09e4a8aa3348732145fb1b7", "_MACHINE_ID": "5b2d6759ad59fef83741f10909893c3e", "_CAP_EFFECTIVE": "3fffffffff", "_COMM": "logger", "_UID": "0", "_SYSTEMD_INVOCATION_ID": "54b69d0fc5fb4945a9025b45ba344c05", "_SOURCE_REALTIME_TIMESTAMP": "1604025170418393", "_PID": "1405236" }
Default
2020-10-30 12:32:50.418 DDUT
{ "_SYSTEMD_SLICE": "system.slice", "_SYSTEMD_INVOCATION_ID": "54b69d0fc5fb4945a9025b45ba344c05", "_BOOT_ID": "10093827d09e4a8aa3348732145fb1b7", "SYSLOG_FACILITY": "3", "_PID": "1405225", "_CAP_EFFECTIVE": "3fffffffff", "PRIORITY": "6", "_TRANSPORT": "stdout", "MESSAGE": "<13>Oct 30 02:32:50 vm_runtime_init: Oct 30 02:32:50 OpenTelemetry Collector is disabled", "SYSLOG_IDENTIFIER": "bash", "_GID": "0", "_UID": "0", "_HOSTNAME": "aef-default-20201023t040523-szr2", "_SYSTEMD_CGROUP": "/system.slice/flex-opentelemetry-collector.service", "_COMM": "bash", "_MACHINE_ID": "5b2d6759ad59fef83741f10909893c3e", "_SYSTEMD_UNIT": "flex-opentelemetry-collector.service", "_STREAM_ID": "4d64bd04e6024d029c227bb0ab50201e" }
Default
2020-10-30 12:32:50.430 DDUT
{ "_SYSTEMD_UNIT": "flex-opentelemetry-collector.service", "SYSLOG_FACILITY": "3", "_HOSTNAME": "aef-default-20201023t040523-szr2", "_EXE": "/bin/bash", "SYSLOG_IDENTIFIER": "bash", "_MACHINE_ID": "5b2d6759ad59fef83741f10909893c3e", "_CAP_EFFECTIVE": "3fffffffff", "_CMDLINE": "/bin/bash /var/lib/flex/vm_runtime/vm_opentelemetry_collector.sh stop", "PRIORITY": "6", "_PID": "1405238", "_COMM": "bash", "_SYSTEMD_CGROUP": "/system.slice/flex-opentelemetry-collector.service", "_UID": "0", "_STREAM_ID": "db1e5ee0211f4aa492f3d62713ff89a9", "_TRANSPORT": "stdout", "MESSAGE": "Oct 30 02:32:50 OpenTelemetry Collector is disabled", "_SYSTEMD_INVOCATION_ID": "54b69d0fc5fb4945a9025b45ba344c05", "_GID": "0", "_BOOT_ID": "10093827d09e4a8aa3348732145fb1b7", "_SYSTEMD_SLICE": "system.slice" }
Default
2020-10-30 12:32:50.430 DDUT
{ "_GID": "0", "_CAP_EFFECTIVE": "3fffffffff", "_COMM": "bash", "_SYSTEMD_INVOCATION_ID": "54b69d0fc5fb4945a9025b45ba344c05", "_MACHINE_ID": "5b2d6759ad59fef83741f10909893c3e", "_STREAM_ID": "db1e5ee0211f4aa492f3d62713ff89a9", "_SYSTEMD_SLICE": "system.slice", "_UID": "0", "SYSLOG_IDENTIFIER": "bash", "_TRANSPORT": "stdout", "_BOOT_ID": "10093827d09e4a8aa3348732145fb1b7", "MESSAGE": "<13>Oct 30 02:32:50 vm_runtime_init: Oct 30 02:32:50 OpenTelemetry Collector is disabled", "PRIORITY": "6", "_CMDLINE": "/bin/bash /var/lib/flex/vm_runtime/vm_opentelemetry_collector.sh stop", "_HOSTNAME": "aef-default-20201023t040523-szr2", "_PID": "1405238", "_SYSTEMD_CGROUP": "/system.slice/flex-opentelemetry-collector.service", "_EXE": "/bin/bash", "SYSLOG_FACILITY": "3", "_SYSTEMD_UNIT": "flex-opentelemetry-collector.service" }
Default
2020-10-30 12:32:50.430 DDUT
{ "_BOOT_ID": "10093827d09e4a8aa3348732145fb1b7", "SYSLOG_FACILITY": "1", "_SYSTEMD_CGROUP": "/system.slice/flex-opentelemetry-collector.service", "_CAP_EFFECTIVE": "3fffffffff", "_UID": "0", "MESSAGE": "Oct 30 02:32:50 OpenTelemetry Collector is disabled", "PRIORITY": "5", "_SYSTEMD_UNIT": "flex-opentelemetry-collector.service", "_HOSTNAME": "aef-default-20201023t040523-szr2", "_SYSTEMD_SLICE": "system.slice", "_SOURCE_REALTIME_TIMESTAMP": "1604025170430790", "_GID": "0", "_SYSTEMD_INVOCATION_ID": "54b69d0fc5fb4945a9025b45ba344c05", "_TRANSPORT": "syslog", "_COMM": "logger", "SYSLOG_IDENTIFIER": "vm_runtime_init", "_MACHINE_ID": "5b2d6759ad59fef83741f10909893c3e", "_PID": "1405249" }
Info
2020-10-30 12:32:50.525 DDUT

These log entries repeat for all my flex services, roughly once a second.

I have opened feedback on this in the GCP Console (mentioning the incurred log cost), but perhaps it is more relevant here as a configuration issue with the sidecars?

If this is something I do have control over (e.g. I can enable the collector somehow, though I haven't found any documentation on it), please point me in the right direction.

This began roughly on the 12th of October, after a new version was released.

Cheers!

fluentd_logger ruby process using 80% memory and causing instance to restart

Urgent

fluentd has been utilizing the majority of the instance's allocated memory and starving our app of memory, until the fluentd process gets killed or the instance restarts. I believe this has been going on for about two months; we were not seeing the issue before then. Others seem to be experiencing the same problem.

This has been creating a noticeable slowdown for our customers and this is something we need to resolve ASAP. If anyone has any ideas on what could be causing this, that would be great.

Dec 17 21:17:59 aef-api--terminal-1--19--0-ck2r kernel: [141421.123211] Out of memory: Kill process 3407 (ruby) score 234 or sacrifice child
Dec 17 21:17:59 aef-api--terminal-1--19--0-ck2r kernel: [141421.130926] Killed process 3407 (ruby) total-vm:883508kB, anon-rss:239232kB, file-rss:0kB

https://stackoverflow.com/questions/47844344/how-to-limit-the-fluentd-logger-memory-usage-in-a-google-app-engine-java-flexibl?noredirect=1&lq=1

https://stackoverflow.com/questions/47759924/google-cloud-platform-app-engine-node-flexible-instance-ruby-sitting-at-50-ram
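As a stopgap along the lines discussed in the linked threads, fluentd's output buffering can be given explicit memory limits so the buffer cannot grow unboundedly. This is a sketch only: the output plugin and tag pattern below are illustrative, not the sidecar's actual configuration, and the right limits depend on the instance size:

```
<match **>
  @type google_cloud          # illustrative output plugin
  buffer_type memory
  buffer_chunk_limit 512k     # cap the size of each buffered chunk
  buffer_queue_limit 8        # cap queued chunks, bounding total buffer RAM
</match>
```

With both limits set, the worst-case buffer memory is roughly chunk_limit times queue_limit; overflow is dropped or retried rather than accumulated in RAM.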

Logging X-Forwarded-For in jsonPayload

Hello,

Would you be willing to accept a change in nginx_proxy and fluentd_logger to log the X-Forwarded-For header into the jsonPayload?

Currently when using a reverse proxy like Cloudflare, only their IP is logged and some compliance schemes require the real visitor IP to be logged.

To avoid having to parse/transform the X-Forwarded-For header (which would require different config for every setup), the original ticket requesting this for App Engine Standard suggested just logging the header in its raw form, which would be enough to manually determine the visitor IP if it is ever required for an audit.
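One way the raw header could be surfaced is via nginx's built-in `$http_x_forwarded_for` variable in a custom access-log format that fluentd then parses into jsonPayload. A sketch (the format name and field names are illustrative, not the sidecar's actual config):

```
log_format custom_json '{ "remoteIp": "$remote_addr", '
                       '"xForwardedFor": "$http_x_forwarded_for", '
                       '"requestUrl": "$request_uri" }';

access_log /var/log/nginx/access.log custom_json;
```

Because `$http_x_forwarded_for` is emitted verbatim, no per-setup parsing rules are needed; the full proxy chain lands in the log entry as-is.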

Regards,
iamacarpet

Audit Fluentd configuration with wildcards "*" to avoid log duplication

The configuration file at https://github.com/GoogleCloudPlatform/appengine-sidecars-docker/blob/master/fluentd_logger/managed_vms.conf#L4 includes the wildcard "*" character. Per fluentd's public documentation:

You should not use '*' with log rotation because it may cause the log duplication. In such case, you should separate in_tail plugin configuration.

Is there some log rotation mechanism enabled? Customers using this sidecar have observed duplicate logs. If rotation is in use, we need to modify the configuration as suggested in https://github.com/fluent/fluentd/blob/27e84796ebdd7f4f74c584c0cab244c312536732/lib/fluent/plugin/in_tail.rb#L249.
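Following the fluentd guidance, the wildcard source would be split into one `in_tail` block per log file, each with its own position file. The paths and tags below are illustrative, not the sidecar's actual layout:

```
<source>
  @type tail
  path /var/log/app_engine/app.log
  pos_file /var/tmp/app.log.pos
  tag app.stdout
</source>

<source>
  @type tail
  path /var/log/app_engine/request.log
  pos_file /var/tmp/request.log.pos
  tag app.request
</source>
```

Separate position files mean a rotation of one file cannot confuse the tail offset of another, which is the duplication mechanism the fluentd docs warn about.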

internal.flushLog: Flush RPC: service bridge returned HTTP 400 ("App Engine APIs over the Service Bridge are disabled.\nIf they are required, enable them by setting the following to your app.yaml:\n\nbeta_settings:\n enable_app_engine_apis: true\n")

The following error for GAE Go Flexible is logged every second:

internal.flushLog: Flush RPC: service bridge returned HTTP 400 ("App Engine APIs over the Service Bridge are disabled.\nIf they are required, enable them by setting the following to your app.yaml:\n\nbeta_settings:\n enable_app_engine_apis: true\n")

The error is coming from:

https://github.com/GoogleCloudPlatform/appengine-sidecars-docker/blob/master/api_proxy/proxy.go#L15

Please note that app.yaml is no longer using any custom runtime:

runtime: go
api_version: go1.8
env: flexible

So it should stop generating the logs above. Can we please stop logging this message?
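For reference, the workaround stated in the error message itself, for apps that genuinely do need the legacy App Engine APIs, is to enable the service bridge in app.yaml:

```yaml
runtime: go
api_version: go1.8
env: flexible

# Only needed if the legacy App Engine APIs are actually used;
# enabling the service bridge silences the HTTP 400.
beta_settings:
  enable_app_engine_apis: true
```

That addresses the 400 response, but not this issue's actual request, which is to stop the api_proxy sidecar from logging the message every second for apps that never call those APIs.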

Error sending VM ready time metrics

Hi,

The following message appears in the log:

ERROR vmagereceiver/vm_age_collector.go:151 Error sending VM ready time metrics {"component_kind": "receiver", "component_type": "vmage", "component_name": "vmage", "error": "rpc error: code = InvalidArgument desc = One or more TimeSeries could not be written: Unknown metric: appengine.googleapis.com/flex/internal/vm_ready_time: timeSeries[0]

Everything was created automatically while setting up a server-side Container in Google Tag Manager. Then the server was switched to production. How could I fix this, and is this fixable on my side?

Thanks in advance!

Proxy settings for websockets

Hi, not sure this is monitored but giving it a go :)

I'm trying to deploy Shiny Docker applications through Google App Engine via flexible containers.

The documentation for Shiny Server states it requires these proxy settings to enable websockets to work:

http {

  map $http_upgrade $connection_upgrade {
      default upgrade;
      ''      close;
    }

  server {
    listen 80;
   
    location / {
      proxy_pass http://localhost:3838;
      proxy_redirect http://localhost:3838/ $scheme://$host/;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $connection_upgrade;
      proxy_read_timeout 20d;
    }
  }
}

These lines in particular are missing from the nginx config in the Docker image that I see used on the flexible VM during debugging:

  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection $connection_upgrade;

An example application which displays the errors in the web console is here:
https://mark-edmondson-usa.appspot.com/sample-apps/hello/

The app engine files I'm deploying are here:
https://github.com/MarkEdmondson1234/appengine-shiny
