juju-solutions / bigtop
This project forked from apache/bigtop
Mirror of Apache Bigtop
License: Apache License 2.0
Basically, a s/state/flag/ rename plus an Endpoints refactor is needed throughout the bigtop stack.
Would it be possible to expose this config parameter in the charm config?
In some cases, e.g. a development environment, it makes sense to set this parameter to true.
Network spaces support for bigtop charms.
hadoop-plugin is a subordinate charm that includes layer-apt. layer-apt now calls clear_removed_package_states, which requires importing apt.apt_pkg:
https://git.launchpad.net/layer-apt/commit/?id=625d18edfbba37210adf9e0f198b7be4bbd7e1d8
The apt module is available as a system site-package because python-apt is installed. hadoop-plugin does not include system site-packages, so it can't find the apt module:
http://paste.ubuntu.com/21799021/
It would be nice if the apt layer fixed this, but I'm not sure it's possible. I thought the apt layer could simply include python-apt in a wheelhouse.txt, but it's not that simple:
http://paste.ubuntu.com/21799623/
As a workaround, we can force the plugin to use system deps with the include_system_packages: true layer option.
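For reference, a minimal sketch of what that workaround looks like in the plugin's layer.yaml, assuming the standard layer-basic option name:

```yaml
# layer.yaml (sketch): let the charm's Python environment see system
# site-packages, so the apt module installed via python-apt is importable.
includes:
  - 'layer:apt'
options:
  basic:
    include_system_packages: true
```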
I just created a zeppelin installation using "juju deploy zeppelin". Here's where I got stuck (the installation runs in a KVM-based virtual machine managed by a MAAS pod):
ubuntu@v010203:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 477M 0 477M 0% /dev
tmpfs 100M 100M 0 100% /run
/dev/vda1 92G 6.5G 81G 8% /
tmpfs 496M 0 496M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/119
tmpfs 100M 0 100M 0% /run/user/1000
ubuntu@v010203:~$ sudo du -hs /run/*
4.0K /run/acpid.pid
0 /run/acpid.socket
0 /run/agetty.reload
4.0K /run/atd.pid
8.0K /run/blkid
20K /run/cloud-init
4.0K /run/crond.pid
0 /run/crond.reboot
0 /run/dbus
0 /run/dmeventd-client
0 /run/dmeventd-server
0 /run/initctl
4.0K /run/initramfs
4.0K /run/irqbalance.pid
4.0K /run/iscsid.pid
0 /run/lock
2.5M /run/log
0 /run/lvm
4.0K /run/lvmetad.pid
0 /run/lxcfs
4.0K /run/lxcfs.pid
0 /run/lxd-bridge
8.0K /run/mdadm
0 /run/motd.dynamic.new
0 /run/mount
24K /run/network
4.0K /run/ntpd.pid
0 /run/puppet
12K /run/resolvconf
4.0K /run/rsyslogd.pid
0 /run/screen
0 /run/sendsigs.omit.d
0 /run/shm
0 /run/snapd-snap.socket
0 /run/snapd.socket
0 /run/sshd
4.0K /run/sshd.pid
0 /run/sudo
164K /run/systemd
0 /run/thermald
4.0K /run/tmpfiles.d
0 /run/ubuntu-fan
304K /run/udev
0 /run/user
4.0K /run/utmp
0 /run/uuidd
0 /run/xtables.lock
97M /run/zeppelin
From /var/log/zeppelin/zeppelin-zeppelin-v010203.log:
WARN [2018-05-06 17:35:36,994] ({main} ZeppelinConfiguration.java[create]:97) - Failed to load configuration, proceeding with a default
INFO [2018-05-06 17:35:37,291] ({main} ZeppelinConfiguration.java[create]:109) - Server Host: 0.0.0.0
INFO [2018-05-06 17:35:37,291] ({main} ZeppelinConfiguration.java[create]:111) - Server Port: 9080
INFO [2018-05-06 17:35:37,292] ({main} ZeppelinConfiguration.java[create]:115) - Context Path: /
INFO [2018-05-06 17:35:37,304] ({main} ZeppelinConfiguration.java[create]:116) - Zeppelin Version: 0.7.2
INFO [2018-05-06 17:35:38,030] ({main} Log.java[initialized]:186) - Logging initialized @47225ms
INFO [2018-05-06 17:35:38,563] ({main} ZeppelinServer.java[setupWebAppContext]:343) - ZeppelinServer Webapp path: /var/run/zeppelin/webapps
INFO [2018-05-06 17:35:43,421] ({main} ZeppelinServer.java[main]:187) - Starting zeppelin server
INFO [2018-05-06 17:35:43,426] ({main} Server.java[doStart]:327) - jetty-9.2.15.v20160210
WARN [2018-05-06 17:35:46,046] ({main} WebAppContext.java[doStart]:514) - Failed startup of context o.e.j.w.WebAppContext@44f75083{/,null,null}{/usr/lib/zeppelin/zeppelin-web-0.7.2.war}
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at org.eclipse.jetty.util.IO.copy(IO.java:162)
at org.eclipse.jetty.util.IO.copy(IO.java:118)
at org.eclipse.jetty.util.resource.JarResource.copyTo(JarResource.java:240)
at org.eclipse.jetty.webapp.WebInfConfiguration.unpack(WebInfConfiguration.java:468)
at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:72)
at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:468)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:504)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:163)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.server.Server.start(Server.java:387)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart(Server.java:354)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:189)
INFO [2018-05-06 17:35:46,463] ({main} AbstractConnector.java[doStart]:266) - Started ServerConnector@11e21d0e{HTTP/1.1}{0.0.0.0:9080}
INFO [2018-05-06 17:35:46,464] ({main} Server.java[doStart]:379) - Started @55741ms
INFO [2018-05-06 17:35:46,464] ({main} ZeppelinServer.java[main]:194) - Done, zeppelin server started
It looks to me like the default Zeppelin webapp directory lives under /run (/var/run/zeppelin), which Ubuntu 16.04 mounts as a small tmpfs (nowadays?).
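As a possible workaround, Zeppelin's zeppelin-env.sh supports pointing the jetty unpack directory elsewhere; a sketch, where the target path is an assumption for illustration:

```shell
# zeppelin-env.sh fragment (sketch): relocate jetty's webapp unpack
# directory off the small /run tmpfs onto the root filesystem.
# The path below is an assumed example, not the charm's default.
export ZEPPELIN_WAR_TEMPDIR=/var/lib/zeppelin/webapps
```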
Log:
unit-kafka-0: 10:52:25 ERROR unit.kafka/0.juju-log kafka-client:6: Uncaught exception while in charm code:
Traceback (most recent call last):
File "./src/charm.py", line 320, in <module>
main(KafkaCharm)
File "/var/lib/juju/agents/unit-kafka-0/charm/venv/ops/main.py", line 438, in main
_emit_charm_event(charm, dispatcher.event_name)
File "/var/lib/juju/agents/unit-kafka-0/charm/venv/ops/main.py", line 150, in _emit_charm_event
event_to_emit.emit(*args, **kwargs)
File "/var/lib/juju/agents/unit-kafka-0/charm/venv/ops/framework.py", line 355, in emit
framework._emit(event) # noqa
File "/var/lib/juju/agents/unit-kafka-0/charm/venv/ops/framework.py", line 856, in _emit
self._reemit(event_path)
File "/var/lib/juju/agents/unit-kafka-0/charm/venv/ops/framework.py", line 931, in _reemit
custom_handler(event)
File "/var/lib/juju/agents/unit-kafka-0/charm/src/provider.py", line 120, in update_acls
self.kafka_auth.load_current_acls()
File "/var/lib/juju/agents/unit-kafka-0/charm/src/auth.py", line 91, in load_current_acls
acls = self._get_acls_from_cluster()
File "/var/lib/juju/agents/unit-kafka-0/charm/src/auth.py", line 43, in _get_acls_from_cluster
acls = KafkaSnap.run_bin_command(bin_keyword="acls", bin_args=command, opts=self.opts)
File "/var/lib/juju/agents/unit-kafka-0/charm/src/snap.py", line 130, in run_bin_command
raise e
File "/var/lib/juju/agents/unit-kafka-0/charm/src/snap.py", line 123, in run_bin_command
output = subprocess.check_output(
File "/usr/lib/python3.8/subprocess.py", line 415, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'KAFKA_OPTS=-Djava.security.auth.login.config=/var/snap/kafka/common//kafka-jaas.cfg kafka.acls --authorizer-properties zookeeper.connect=192.168.1.246:2182,192.168.1.83:2182,192.168.1.98:2182/kafka --list --zk-tls-config-file=/var/snap/kafka/common/server.properties' returned non-zero exit status 1.
When adding a relation to zookeeper, the charm gets into an error state: it apparently cannot download from the bigtop sources because server certificate verification fails.
See logs at: http://paste.ubuntu.com/p/yVXmjS4Dc5/
I am attempting to deploy hadoop on arm64. As a test I tried the following command:
juju deploy --series xenial cs:~bigdata-charmers/hadoop-namenode --constraints arch=arm64
I have attached the log; I am seeing the following error in it:
unit-hadoop-namenode-0-log.tar.gz
2017-10-18 19:43:34 DEBUG install Debug: Processing report from juju-4d3bd6-0.lxd with processor Puppet::Reports::Store
2017-10-18 19:43:34 DEBUG install Traceback (most recent call last):
2017-10-18 19:43:34 DEBUG install File "/var/lib/juju/agents/unit-hadoop-namenode-0/charm/hooks/install", line 19, in <module>
2017-10-18 19:43:34 DEBUG install main()
2017-10-18 19:43:34 DEBUG install File "/usr/local/lib/python3.5/dist-packages/charms/reactive/__init__.py", line 78, in main
2017-10-18 19:43:34 DEBUG install bus.dispatch()
2017-10-18 19:43:34 DEBUG install File "/usr/local/lib/python3.5/dist-packages/charms/reactive/bus.py", line 423, in dispatch
2017-10-18 19:43:34 DEBUG install _invoke(other_handlers)
2017-10-18 19:43:34 DEBUG install File "/usr/local/lib/python3.5/dist-packages/charms/reactive/bus.py", line 406, in _invoke
2017-10-18 19:43:34 DEBUG install handler.invoke()
2017-10-18 19:43:34 DEBUG install File "/usr/local/lib/python3.5/dist-packages/charms/reactive/bus.py", line 280, in invoke
2017-10-18 19:43:34 DEBUG install self._action(*args)
2017-10-18 19:43:34 DEBUG install File "/var/lib/juju/agents/unit-hadoop-namenode-0/charm/reactive/namenode.py", line 73, in install_namenode
2017-10-18 19:43:34 DEBUG install bigtop.trigger_puppet()
2017-10-18 19:43:34 DEBUG install File "lib/charms/layer/apache_bigtop_base.py", line 705, in trigger_puppet
2017-10-18 19:43:34 DEBUG install java_home()),
2017-10-18 19:43:34 DEBUG install File "/usr/local/lib/python3.5/dist-packages/jujubigdata/utils.py", line 195, in re_edit_in_place
2017-10-18 19:43:34 DEBUG install with Path(filename).in_place(encoding=encoding) as (reader, writer):
2017-10-18 19:43:34 DEBUG install File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__
2017-10-18 19:43:34 DEBUG install return next(self.gen)
2017-10-18 19:43:34 DEBUG install File "/usr/local/lib/python3.5/dist-packages/path.py", line 1452, in in_place
2017-10-18 19:43:34 DEBUG install os.rename(self, backup_fn)
2017-10-18 19:43:34 DEBUG install FileNotFoundError: [Errno 2] No such file or directory: Path('/etc/default/bigtop-utils') -> Path('/etc/default/bigtop-utils.bak')
2017-10-18 19:43:34 ERROR juju.worker.uniter.operation runhook.go:107 hook "install" failed: exit status 1
2017-10-18 19:43:34 INFO juju.worker.uniter resolver.go:100 awaiting error resolution for "install" hook
When deploying slave instances with lots of memory/CPU, it would be nice to be able to customize some of the YARN options so that more resources can be allocated to the executors.
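As a sketch, charm options exposing common YARN resource knobs could look like this; the option names and defaults here are illustrative assumptions, not the charm's existing API:

```yaml
# config.yaml (sketch): hypothetical options surfacing YARN resource limits.
options:
  nodemanager_memory_mb:
    type: int
    default: 8192
    description: Value for yarn.nodemanager.resource.memory-mb.
  nodemanager_vcores:
    type: int
    default: 8
    description: Value for yarn.nodemanager.resource.cpu-vcores.
```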
Kafka goes into an error state after relating to zookeeper:
Executed commands:
juju deploy cs:kafka-40
juju deploy cs:zookeeper-42
juju add-relation kafka zookeeper
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed Traceback (most recent call last):
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "/var/lib/juju/agents/unit-kafka-test-0/charm/hooks/zookeeper-relation-changed", line 19, in <module>
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed main()
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "/usr/local/lib/python3.5/dist-packages/charms/reactive/__init__.py", line 113, in main
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed bus.dispatch(restricted=restricted_mode)
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "/usr/local/lib/python3.5/dist-packages/charms/reactive/bus.py", line 364, in dispatch
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed _invoke(other_handlers)
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "/usr/local/lib/python3.5/dist-packages/charms/reactive/bus.py", line 340, in _invoke
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed handler.invoke()
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "/usr/local/lib/python3.5/dist-packages/charms/reactive/bus.py", line 162, in invoke
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed self._action(*args)
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "/var/lib/juju/agents/unit-kafka-test-0/charm/reactive/kafka.py", line 43, in configure_kafka
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed kafka.configure_kafka(zks)
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "lib/charms/layer/bigtop_kafka.py", line 64, in configure_kafka
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed bigtop.trigger_puppet()
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "lib/charms/layer/apache_bigtop_base.py", line 721, in trigger_puppet
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed java_home()),
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "/usr/local/lib/python3.5/dist-packages/jujubigdata/utils.py", line 195, in re_edit_in_place
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed with Path(filename).in_place(encoding=encoding) as (reader, writer):
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed return next(self.gen)
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed File "/usr/local/lib/python3.5/dist-packages/path.py", line 1452, in in_place
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed os.rename(self, backup_fn)
unit-kafka-test-0: 15:14:02 DEBUG unit.kafka-test/0.zookeeper-relation-changed FileNotFoundError: [Errno 2] No such file or directory: Path('/etc/default/bigtop-utils') -> Path('/etc/default/bigtop-utils.bak')
unit-kafka-test-0: 15:14:02 ERROR juju.worker.uniter.operation hook "zookeeper-relation-changed" failed: exit status 1
This feature request follows on from a previous JIRA issue, https://issues.apache.org/jira/browse/ZOOKEEPER-2095, about adding systemd support for ZooKeeper. There is a patch file attached to the JIRA; however, I believe we only need the systemd startup script.
Tested on a distributed 3-node Ubuntu instance with a systemd zk.service, following the instructions here: https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-an-apache-zookeeper-cluster-on-ubuntu-18-04
working systemd zk.service file:
```
[Unit]
Description=Zookeeper Daemon
Documentation=http://zookeeper.apache.org
Requires=network.target
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/zookeeper
ExecStart=/usr/lib/zookeeper/bin/zkServer.sh start-foreground /etc/zookeeper/conf/zoo.cfg
ExecStop=/usr/lib/zookeeper/bin/zkServer.sh stop /etc/zookeeper/conf/zoo.cfg
ExecReload=/usr/lib/zookeeper/bin/zkServer.sh restart /etc/zookeeper/conf/zoo.cfg
Restart=on-failure

[Install]
WantedBy=default.target
```
ubuntu@juju-c35d5c:$ sudo systemctl status zk
● zk.service - Zookeeper Daemon
Loaded: loaded (/etc/systemd/system/zk.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-06-08 19:17:11 UTC; 380ms ago
Docs: http://zookeeper.apache.org
Process: 7489 ExecStop=/usr/lib/zookeeper/bin/zkServer.sh stop /etc/zookeeper/conf/zoo.cfg (code=exited, status=0/SUCCESS)
Main PID: 7502 (java)
Tasks: 11
Memory: 20.0M
CPU: 366ms
CGroup: /system.slice/zk.service
└─7502 java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/lib/zookeeper/bin/../build/classes:/usr/lib/zookeeper/bin/../build/lib/*.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12
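For completeness, the unit above was installed and activated with the usual systemd steps (a sketch; paths as in the unit file above):

```shell
# Install the unit, reload systemd, and start it at boot and immediately.
sudo cp zk.service /etc/systemd/system/zk.service
sudo systemctl daemon-reload
sudo systemctl enable --now zk
```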
Thoughts?
I would like to use Juju storage to mount my software RAID, provided by MAAS, at $dfs.datanode.data.dir.
I have data nodes with the OS installed to a 64GB SATADOM and 12 disks each for HDFS storage. I use MAAS to create a software RAID across those disks, and I would like Juju storage to mount that RAID as HDFS storage.
Many thanks
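A hypothetical deployment using Juju's storage directive might look like the following; the storage name (`hdfs-data`) and the `maas` pool are assumptions, since the charm would first need to declare such storage in its metadata:

```shell
# Sketch only: 'hdfs-data' is a hypothetical storage binding the charm
# would need to declare; 'maas' names an assumed storage pool.
juju deploy hadoop-slave --storage hdfs-data=maas,12T
```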
Hi Guys,
These init.rc files no longer work and produce errors when invoked via systemctl.
Please replace:
su -s /bin/bash $SVC_USER -c "cd $WORKING_DIR && $EXEC_PATH --config '$CONF_DIR' start $DAEMON_FLAGS"
with
/sbin/runuser $SVC_USER -c "cd $WORKING_DIR && $EXEC_PATH --config '$CONF_DIR' start $DAEMON_FLAGS"
Cheers
Brett
In the apache-spark charm, we had driver and executor config options:
https://github.com/juju-solutions/layer-apache-spark/blob/master/config.yaml#L8
These allowed the user to change the defaults (1g) post-deployment. Why didn't this make it into the bigtop-spark charm?
If there's no good reason, let's bring those config opts back. FWIW, the logic behind those options in apache-spark is here:
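A sketch of what restoring those options in the bigtop-spark config.yaml could look like, mirroring the apache-spark defaults (the option names are assumed to match the old charm):

```yaml
# config.yaml (sketch): restore the old apache-spark memory options.
options:
  driver_memory:
    type: string
    default: 1g
    description: Memory for the Spark driver (spark.driver.memory).
  executor_memory:
    type: string
    default: 1g
    description: Memory per Spark executor (spark.executor.memory).
```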
Hi,
While trying to move from the apache-spark charm to cs:spark-34, we encountered some issues when deploying on a restricted network.
gcc is missing, so the install hook fails while trying to build the netifaces module: http://paste.ubuntu.com/24494765/. We "solved" this by manually running apt-get install build-essential.
Later it failed trying to fetch modules from Puppetlabs: http://paste.ubuntu.com/24495398/
The model used for the deploy is configured to use an http(s) proxy in order to reach the internet, but it looks like the charm needs more than GitHub access (besides the gcc issue).
When deploying kafka, I deployed the kafka-test-app and then removed it before it was completely set up. This put the kafka charm in an error state:
kafka/0* error idle 0 10.246.167.102 hook failed: "kafka-client-relation-broken"
ntp/0* active idle 10.246.167.102 123/udp chrony: Ready
kafka/1 waiting idle 1 10.246.166.233 Awaiting restart operation
ntp/1 active idle 10.246.166.233 123/udp chrony: Ready
kafka/2 waiting idle 2 10.246.166.8 Awaiting restart operation
ntp/2 active idle 10.246.166.8 123/udp chrony: Ready
tls-certificates-operator/0* active idle 3 10.246.166.140
zookeeper/0 active idle 4 10.246.164.162
zookeeper/1 active idle 5 10.246.164.244
zookeeper/2* active idle 6 10.246.167.95
In the debug-log we see:
unit-kafka-0: 19:40:57 DEBUG unit.kafka/0.juju-log kafka-client:11: cmd failed - cmd= charmed-kafka.configs --bootstrap-server=10.246.166.233:9093,10.246.167.102:9093,10.246.166.8:9093 --command-config=/var/snap/charmed-kafka/current/etc/kafka/client.properties --alter --entity-type=users --entity-name=relation-11 --delete-config=SCRAM-SHA-512, stdout=, stderr=Error while executing config command with args '--bootstrap-server=10.246.166.233:9093,10.246.167.102:9093,10.246.166.8:9093 --command-config=/var/snap/charmed-kafka/current/etc/kafka/client.properties --alter --entity-type=users --entity-name=relation-11 --delete-config=SCRAM-SHA-512'
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.ResourceNotFoundException: Attempt to delete a user credential that does not exist
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2096)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
at kafka.admin.ConfigCommand$.alterUserScramCredentialConfigs(ConfigCommand.scala:464)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:427)
at kafka.admin.ConfigCommand$.processCommand(ConfigCommand.scala:326)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:97)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Caused by: org.apache.kafka.common.errors.ResourceNotFoundException: Attempt to delete a user credential that does not exist
unit-kafka-0: 19:40:57 ERROR unit.kafka/0.juju-log kafka-client:11: Uncaught exception while in charm code:
Traceback (most recent call last):
File "/var/lib/juju/agents/unit-kafka-0/charm/./src/charm.py", line 509, in <module>
main(KafkaCharm)
File "/var/lib/juju/agents/unit-kafka-0/charm/venv/ops/main.py", line 441, in main
_emit_charm_event(charm, dispatcher.event_name)
File "/var/lib/juju/agents/unit-kafka-0/charm/venv/ops/main.py", line 149, in _emit_charm_event
event_to_emit.emit(*args, **kwargs)
File "/var/lib/juju/agents/unit-kafka-0/charm/venv/ops/framework.py", line 354, in emit
framework._emit(event)
File "/var/lib/juju/agents/unit-kafka-0/charm/venv/ops/framework.py", line 830, in _emit
self._reemit(event_path)
File "/var/lib/juju/agents/unit-kafka-0/charm/venv/ops/framework.py", line 919, in _reemit
custom_handler(event)
File "/var/lib/juju/agents/unit-kafka-0/charm/src/provider.py", line 141, in _on_relation_broken
self.kafka_auth.delete_user(username=username)
File "/var/lib/juju/agents/unit-kafka-0/charm/src/auth.py", line 199, in delete_user
KafkaSnap.run_bin_command(bin_keyword="configs", bin_args=command)
File "/var/lib/juju/agents/unit-kafka-0/charm/src/snap.py", line 180, in run_bin_command
raise e
File "/var/lib/juju/agents/unit-kafka-0/charm/src/snap.py", line 173, in run_bin_command
output = subprocess.check_output(
File "/usr/lib/python3.10/subprocess.py", line 420, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/lib/python3.10/subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command ' charmed-kafka.configs --bootstrap-server=10.246.166.233:9093,10.246.167.102:9093,10.246.166.8:9093 --command-config=/var/snap/charmed-kafka/current/etc/kafka/client.properties --alter --entity-type=users --entity-name=relation-11 --delete-config=SCRAM-SHA-512' returned non-zero exit status 1.
It looks like the charm is too eager to remove credentials for an application that never finished setting up.
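One way to make the cleanup tolerant would be to treat "does not exist" from kafka-configs as a no-op during relation-broken. A sketch; `run_bin_command` stands in for KafkaSnap.run_bin_command, and its signature here is an illustrative assumption, not the charm's actual API:

```python
import subprocess


def delete_user_tolerant(run_bin_command, username):
    """Delete a SCRAM credential, ignoring 'already gone' errors.

    If the relation broke before the credential was ever created,
    kafka-configs fails with ResourceNotFoundException; swallow that
    case instead of letting the hook go into an error state.
    """
    command = [f"--entity-name={username}", "--delete-config=SCRAM-SHA-512"]
    try:
        return run_bin_command(bin_keyword="configs", bin_args=command)
    except subprocess.CalledProcessError as e:
        if "does not exist" in (e.stderr or ""):
            # Nothing to clean up; treat deletion of a missing user as success.
            return ""
        raise  # any other failure is still a real error
```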
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
$ conjure-up --version
conjure-up 2.5.1
$ juju --version
2.3.1-xenial-amd64
$ lxd.lxc --version
2.21
~$ sudo snap list
Name Version Rev Developer Notes
conjure-up 2.5.1-20180106.0201 919 canonical classic
core 16-2.30 3748 canonical core
juju 2.3.1 3106 canonical classic
lxd 2.21 5408 canonical -
Getting this error (https://paste.ubuntu.com/26336169/) when spark is deployed to LXD.
To reproduce:
conjure-up # use lxd/localhost provider
choose any bundle with spark and it should fail with this error.
The series in the zookeeper metadata.yaml gets merged with the series of the base layer, and we end up with a multi-series charm listing xenial twice. Easy fix: remove series from metadata.yaml.
In an LXC setup I couldn't write to a Kafka queue from the host machine. To be clear, the host machine was 10.0.3.1 and Kafka was at 10.0.3.209.
The reason was that the advertised hostname was not resolvable from 10.0.3.1. The fix was to add an entry to /etc/hosts for the machine where Kafka runs. I am pretty sure we have discussed this in the past.
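For illustration, the workaround on the host looks like this, where `kafka-0` is a placeholder for whatever hostname the broker actually advertises:

```
# /etc/hosts on the 10.0.3.1 host (sketch); 'kafka-0' is a placeholder
# for the advertised hostname.
10.0.3.209  kafka-0
```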
The apache-spark charm sets a number of useful Spark environment variables:
cat /etc/environment | grep -i spark
PATH="/usr/lib/jvm/java-8-openjdk-amd64/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/spark/bin:/usr/lib/hadoop/bin:/usr/lib/hadoop/sbin"
SPARK_JAR="hdfs:///user/ubuntu/share/lib/spark-assembly.jar"
SPARK_DRIVER_MEMORY="1g"
SPARK_EXECUTOR_MEMORY="1g"
SPARK_CONF_DIR="/etc/spark/conf"
PYSPARK_DRIVER_PYTHON="ipython"
SPARK_HOME="/usr/lib/spark"
This is not the case for the bigtop-spark charm:
cat /etc/environment | grep -i spark
MASTER="spark://172.28.0.15:7077"
Is this a bug or is it intentional? It causes the installation of Apache Toree to fail, because Toree uses these env vars to find Spark, even though things like pyspark are on the PATH.
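As a stopgap, the missing variables can be added to /etc/environment manually; a sketch, with values copied from the apache-spark output above (they should be checked against the local install):

```shell
# Append to /etc/environment (sketch); values mirror the apache-spark charm.
SPARK_HOME="/usr/lib/spark"
SPARK_CONF_DIR="/etc/spark/conf"
SPARK_DRIVER_MEMORY="1g"
SPARK_EXECUTOR_MEMORY="1g"
```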
Steps to reproduce:
INFO [2018-10-21 16:26:32,509] ({pool-2-thread-3} RemoteInterpreterManagedProcess.java[start]:126) - Run interpreter process [/usr/lib/zeppelin/bin/interpreter.sh, -d, /usr/lib/zeppelin/interpreter/spark, -p, 40223, -l, /usr/lib/zeppelin/local-repo/2ANGGHHMQ]
INFO [2018-10-21 16:26:35,029] ({pool-2-thread-3} RemoteInterpreter.java[init]:221) - Create remote interpreter org.apache.zeppelin.spark.SparkInterpreter
INFO [2018-10-21 16:26:35,295] ({pool-2-thread-3} RemoteInterpreter.java[pushAngularObjectRegistryToRemote]:551) - Push local angular object registry from ZeppelinServer to remote interpreter group 2ANGGHHMQ:shared_process
INFO [2018-10-21 16:26:35,319] ({pool-2-thread-3} RemoteInterpreter.java[init]:221) - Create remote interpreter org.apache.zeppelin.spark.PySparkInterpreter
INFO [2018-10-21 16:26:35,345] ({pool-2-thread-3} RemoteInterpreter.java[init]:221) - Create remote interpreter org.apache.zeppelin.spark.SparkSqlInterpreter
INFO [2018-10-21 16:26:35,349] ({pool-2-thread-3} RemoteInterpreter.java[init]:221) - Create remote interpreter org.apache.zeppelin.spark.DepInterpreter
WARN [2018-10-21 16:27:28,142] ({pool-2-thread-3} NotebookServer.java[afterStatusChange]:2058) - Job 20150210-015259_1403135953 is finished, status: ERROR, exception: null, result: