
kafka-web-console's Introduction

Retired

This project is no longer supported. Please consider Kafka Manager instead.

Kafka Web Console

Kafka Web Console is a Java web application for monitoring Apache Kafka. With a modern web browser, you can view the following from the console:

  • Registered brokers



  • Topics, partitions, log sizes, and partition leaders



  • Consumer groups, individual consumers, consumer owners, partition offsets and lag



  • Graphs showing consumer offset and lag history as well as consumer/producer message throughput history.



  • Latest published topic messages (requires web browser support for WebSocket)



Furthermore, the console provides a JSON API described in RAML. The API can be tested using the embedded API Console accessible through the URL http://[hostname]:[port]/api/console.

Requirements

  • Play Framework 2.2.x
  • Apache Kafka 0.8.x
  • Zookeeper 3.3.3 or 3.3.4

Deployment

Consult Play!'s documentation for deployment options and instructions.

Getting Started

  1. Kafka Web Console requires a relational database. By default, the server connects to an embedded H2 database and no database installation or configuration is needed. Consult Play!'s documentation to specify a database for the console. The following databases are supported:

    • H2 (default)
    • PostgreSQL
    • Oracle
    • DB2
    • MySQL
    • Apache Derby
    • Microsoft SQL Server

    Changing the database might necessitate making minor modifications to the DDL to accommodate the new database.
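Play reads its JDBC settings from conf/application.conf. A minimal sketch of pointing the console at MySQL instead of the embedded H2; the database name, user, and password below are placeholders:

```
# conf/application.conf (sketch; values are placeholders)
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost:3306/kafkawebconsole"
db.default.user=console
db.default.password=secret
```

The MySQL connector jar must also be available to the application (e.g. declared as a build dependency).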

  2. Before you can monitor a broker, you need to register the Zookeeper server associated with it:

(screenshot: Register Zookeeper form)

Filling in the form and clicking on Connect will register the Zookeeper server. Once the console has successfully established a connection with the registered Zookeeper server, it can retrieve all necessary information about brokers, topics, and consumers:

(screenshot: registered Zookeeper servers)

Support

Please report any bugs or desired features.

kafka-web-console's People

Contributors

cjmamo


kafka-web-console's Issues

"chroot" support

Hello!

It would be great to add support for a "chroot" in Zookeeper's connection string,
for example localhost:2181/events.
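In such a string the chroot is everything from the first '/' onward. A shell sketch of the split, with illustrative variable names (this is not the console's actual parsing code):

```shell
# Split a ZooKeeper connection string into ensemble and chroot (illustrative).
conn="localhost:2181/events"
ensemble="${conn%%/*}"         # part before the first '/'
if [ "$conn" = "$ensemble" ]; then
  chroot=""                    # no '/' present, so no chroot
else
  chroot="/${conn#*/}"         # everything from the first '/' onward
fi
echo "ensemble=$ensemble chroot=$chroot"
```

Running it prints ensemble=localhost:2181 chroot=/events.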

play run xxx

Oops: PatternSyntaxException
An unexpected error occured caused by exception PatternSyntaxException:
unbalanced parenthesis

This exception has been logged with id 6k0fgjebj

Hi,
I deployed the source code in accordance with your instructions and hit an exception. Do you know what this means?

Accessing the graphs

I've just started the application and it's working fine, but how can I access the graphs?

Error occurred during initialization of VM

Is something wrong with my install?
I can't start kafka-web-console.

/kafka-web-console/app/actors/ClientNotificationManager.scala:27: match may not be exhaustive.
[warn] It would fail on the following inputs: None, Some((x: Any forSome x not in (?, ?)))
[warn] val channel = Registry.lookupObject(PropertyConstants.BroadcastChannel) match {
[warn] ^
[warn] there were 4 feature warning(s); re-run with -feature for details
[warn] two warnings found

(Starting server. Type Ctrl+D to exit logs, the server will remain in background)

Error occurred during initialization of VM
java/lang/NoClassDefFoundError: java/lang/invoke/AdapterMethodHandle

Using postgresql

How can I use it with PostgreSQL?
PostgreSQL does not have the type "Long". When I replaced long -> bigint, varchar -> varchar(30), int -> integer, I got an error:
[ERROR] [03/20/2014 13:12:16.336] [application-akka.actor.default-dispatcher-3] [akka://application/user/$a] Exception while executing statement : ERROR: column zookeepers7.groupId does not exist
Position: 56
errorCode: 0, sqlState: 42703
Select
"zookeepers7"."host" as "zookeepers7_host",
"zookeepers7"."groupId" as "zookeepers7_groupId",
"zookeepers7"."name" as "zookeepers7_name",
"zookeepers7"."port" as "zookeepers7_port",
"zookeepers7"."chroot" as "zookeepers7_chroot",
"zookeepers7"."statusId" as "zookeepers7_statusId"
From
"zookeepers" "zookeepers7"
jdbcParams:[]
akka.actor.ActorInitializationException: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:218)
at akka.actor.ActorCell.create(ActorCell.scala:578)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
at akka.dispatch.Mailbox.run(Mailbox.scala:218)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

please support zookeeper non-root path

Right now, a zookeeper should be registered with host, port and other attributes separately, and then kafka-web-console will fetch all the information from that zookeeper. But what if I store the information in a non-root path, which is supported by kafka, like /kafka-meta?

Can't get any brokers or topics listed

We're not able to see anything listed under Brokers or Topics on kafka-web-console. We have two zookeeper nodes in the data center and they show up fine, but the info under the other tabs is just blank.

There's no logging being produced, either, so I'm not even sure how to debug this.

Suggestions?

Thanks!

-Jeff

Some additional info that might help shed some light:

screen shot 2014-09-10 at 4 20 49 pm
screen shot 2014-09-10 at 4 22 06 pm

And, in zookeeper, we have plenty of data.

...
├── kafka
│ ├── address_book
│ │ ├── partition_0
│ ├── apns
│ │ ├── partition_0
│ ├── admin
│ │ ├── delete_topics
│ ├── consumers
│ │ ├── kafkaspout
│ │ │ ├── offsets
│ │ │ │ ├── address_book
│ │ │ │ │ ├── 3
│ │ │ │ │ ├── 2
...

eliminate same brokers and topics

We have a zookeeper cluster, so we register all zookeeper servers in case any of them fails. Then there are duplicate brokers in the brokers tab. Things are the same with topics.

Is it possible to eliminate the duplicate information?

Brokers list page is throwing an exception when chroot is left empty

Zookeeper connected, but brokers list is empty.

Images attached,

screen shot 2014-12-09 at 17 28 58

screen shot 2014-12-09 at 17 29 27
As you can see, calling /brokers.json throws an exception. Here is the Heroku server log.

screen shot 2014-12-09 at 17 35 33

It is worth mentioning that I can see a list of topics, but the feed list is always empty as well.

screen shot 2014-12-09 at 17 31 29

screen shot 2014-12-09 at 17 31 52

This is the connection string zk.connect=localhost:2181 from /opt/kafka/config/server.properties.

how to build

I didn't find any documentation on how to build and configure kafka-web-console. Kindly provide installation instructions in the README.
Thanks,
Mahesh.S

Schema for MySQL

If you want to use MySQL instead of H2 then you can use this:

CREATE TABLE zookeepers (
  id BIGINT NOT NULL AUTO_INCREMENT,
  name VARCHAR(255) NOT NULL,
  host VARCHAR(255) NOT NULL,
  chroot VARCHAR(255),
  port BIGINT NOT NULL,
  statusId BIGINT NOT NULL,
  groupId BIGINT NOT NULL,
  PRIMARY KEY (id),
  UNIQUE (name)
);

CREATE TABLE status (
  id BIGINT,
  name VARCHAR(255),
  PRIMARY KEY (id)
);

CREATE TABLE groups (
  id BIGINT,
  name VARCHAR(255),
  PRIMARY KEY (id)
);

CREATE TABLE offsetHistory (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  zookeeperId BIGINT,
  topic VARCHAR(255),
  FOREIGN KEY (zookeeperId) REFERENCES zookeepers(id),
  UNIQUE (zookeeperId, topic)
);

CREATE TABLE offsetPoints (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  consumerGroup VARCHAR(255),
  timestamp TIMESTAMP,
  offsetHistoryId BIGINT,
  partition BIGINT,
  offset BIGINT,
  logSize BIGINT,
  FOREIGN KEY (offsetHistoryId) REFERENCES offsetHistory(id)
);

CREATE TABLE settings (
  key_ VARCHAR(255) PRIMARY KEY,
  value VARCHAR(255)
);

INSERT INTO status (id, name) VALUES (0, 'CONNECTING');
INSERT INTO status (id, name) VALUES (1, 'CONNECTED');
INSERT INTO status (id, name) VALUES (2, 'DISCONNECTED');
INSERT INTO status (id, name) VALUES (3, 'DELETED');

INSERT INTO groups (id, name) VALUES (0, 'ALL');
INSERT INTO groups (id, name) VALUES (1, 'DEVELOPMENT');
INSERT INTO groups (id, name) VALUES (2, 'PRODUCTION');
INSERT INTO groups (id, name) VALUES (3, 'STAGING');
INSERT INTO groups (id, name) VALUES (4, 'TEST');

INSERT INTO settings (key_, value) VALUES ('PURGE_SCHEDULE', '0 0 0 ? * SUN *');
INSERT INTO settings (key_, value) VALUES ('OFFSET_FETCH_INTERVAL', '30');

A question

How do I install it?
I downloaded Play Framework 1.2.7 and ran it, but I get errors:

Oops: PatternSyntaxException

An unexpected error occured caused by exception PatternSyntaxException:
Please tell me how to install it, thanks very much.
Can you provide a deployable package? I don't know how to compile the source code.

Topic: can't get information

Hi,
When I click the topic tab, the topic information is missing. I checked the log; the messages are below:
[debug] application - Getting partition offsets for topic test
[debug] application - Getting partition leaders for topic test
[debug] application - Getting partition leaders for topic test
[debug] application - Getting partition log sizes for topic test from partition leaders server-03:9092
[warn] application - Could not connect to partition leader server-03:9092. Error message: empty.head


Topics list

Hi,

I managed to start the application after adapting the evolutions scripts for the db a bit, and successfully registered one Zookeeper, but nothing appears on the Topics page.
Here's what I have in the logs:

! @6ii9ko0jf - Internal server error, for (GET) [/topics.json] ->

play.api.Application$$anon$1: Execution exception[[UnsupportedOperationException: empty.head]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10-2.2.1.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10-2.2.1.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:165) [play_2.10-2.2.1.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:162) [play_2.10-2.2.1.jar:2.2.1]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library-2.10.2.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library-2.10.2.jar:na]
Caused by: java.lang.UnsupportedOperationException: empty.head
at scala.collection.immutable.Vector.head(Vector.scala:192) ~[scala-library-2.10.2.jar:na]
at common.Util$$anonfun$getPartitionsLogSize$2$$anonfun$apply$16$$anonfun$5.apply(Util.scala:56) ~[classes/:na]
at common.Util$$anonfun$getPartitionsLogSize$2$$anonfun$apply$16$$anonfun$5.apply(Util.scala:56) ~[classes/:na]
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:253) [scala-library-2.10.2.jar:na]
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:249) [scala-library-2.10.2.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:29) [scala-library-2.10.2.jar:na]

Do you know where the problem could come from?

Topic feed not working

Hi,

I have installed the web console on CentOS. I have created topics and a producer, and I can see the messages in terminal consumers, but I'm not able to see the topic feed on the web console. I am not sure if this is a WebSocket issue.

curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" -H "Host: localhost" -H "Origin: http://localhost:9000" http://127.0.0.1:9000/#/topics/apache_log/webevent
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 4323

If I do:

curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" -H "Host: echo.websocket.org" -H "Origin: http://www.websocket.org" http://echo.websocket.org
HTTP/1.1 101 Web Socket Protocol Handshake
Upgrade: WebSocket
Connection: Upgrade
WebSocket-Origin: http://www.websocket.org
WebSocket-Location: ws://echo.websocket.org/
Server: Kaazing Gateway

Also, I get this error in the Kafka broker log when I refresh the topic feed page.

INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor)
[2014-04-12 06:30:25,801] ERROR Closing socket for /127.0.0.1 because of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer

Thanks.
Manish.

ERROR Closing socket and web-console-consumer-XXXXX

Hi,

Kafka gives an error whenever I refresh the 'consumer group' page.
The full error log is shown below.

ERROR Closing socket for /10.0.2.83 because of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at kafka.utils.Utils$.read(Utils.scala:375)
at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
at kafka.network.Processor.read(SocketServer.scala:347)
at kafka.network.Processor.run(SocketServer.scala:245)
at java.lang.Thread.run(Thread.java:745)

Another problem I found is that it creates many consumers in ZooKeeper, under /consumers:

[web-console-consumer-81258, web-console-consumer-40995, web-console-consumer-15923, web-console-consumer-2305, web-console-consumer-83410, web-console-consumer-45966, web-console-consumer-82765, web-console-consumer-49384, web-console-consumer-68405, web-console-consumer-93714, web-console-consumer-9685, web-console-consumer-17315, web-console-consumer-80814, web-console-consumer-18646, web-console-consumer-60258, web-console-consumer-53349, web-console-consumer-47546, web-console-consumer-13959, logstash, web-console-consumer-63632]

Is this normal?

Thanks.
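For what it's worth, the throwaway groups all match the pattern web-console-consumer-<number>, so they can be separated from real groups (such as logstash above) before cleaning up their znodes. A shell sketch of the filtering; the group list is sample data taken from the listing above:

```shell
# Separate the console's throwaway consumer groups from real ones (sketch).
# The names are sample data; real input would come from listing /consumers.
groups="web-console-consumer-81258 logstash web-console-consumer-63632"
for g in $groups; do
  case "$g" in
    web-console-consumer-*) echo "stale: $g" ;;  # candidate for removal
    *)                      echo "keep: $g"  ;;
  esac
done
```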


Environment info

kafka_2.9.2-0.8.1.1
latest kafka-web-console-master

Show the latest consumed info first

Right now, the topic consume info is appended to the end of the Topic Feed page; to watch it I have to keep scrolling down, which is not convenient.

Can't connect to zookeeper

The requirements say Zookeeper 3.3.3, but I use kafka_2.9.2-0.8.1, and its Zookeeper version is 3.3.4. When I register the Zookeeper, the status is always "connecting"...

Compilation failed

Good day!
I got the following error on start:

[kafka-web-console] $ start
[info] Compiling 28 Scala sources and 2 Java sources to /home/vyacheslav/git-repos/kafka-web-console/target/scala-2.10/classes...
[warn] /home/vyacheslav/git-repos/kafka-web-console/app/kafka/consumer/async/ConsumerFetcherManager.scala:50: imported `PartitionTopicInfo' is permanently hidden by definition of object PartitionTopicInfo in package async
[warn] import kafka.consumer.PartitionTopicInfo
...............
[error]
[error] last tree to typer: Literal(Constant(play.api.templates.Html))
[error] symbol: null
[error] symbol definition: null
[error] tpe: Class(classOf[play.api.templates.Html])
[error] symbol owners:
[error] context owners: anonymous class anonfun$f$1 -> package html
[error]
[error] == Enclosing template or block ==
[error]
[error] Template( // val <local $anonfun>: , tree.tpe=views.html.anonfun$f$1
[error] "scala.runtime.AbstractFunction0", "scala.Serializable" // parents
[error] ValDef(
[error] private
[error] "_"
[error]
[error]
[error] )
[error] // 3 statements
[error] DefDef( // final def apply(): play.api.templates.Html
[error] final
[error] "apply"
[error] []
[error] List(Nil)
[error] // tree.tpe=play.api.templates.Html
[error] Apply( // def apply(): play.api.templates.Html in object index, tree.tpe=play.api.templates.Html
[error] index.this."apply" // def apply(): play.api.templates.Html in object index, tree.tpe=()play.api.templates.Html
[error] Nil
[error] )
[error] )
[error] DefDef( // final def apply(): Object
[error] final
[error] "apply"
[error] []
[error] List(Nil)
[error] // tree.tpe=Object
[error] Apply( // final def apply(): play.api.templates.Html, tree.tpe=play.api.templates.Html
[error] index$$anonfun$f$1.this."apply" // final def apply(): play.api.templates.Html, tree.tpe=()play.api.templates.Html
[error] Nil
[error] )
[error] )
[error] DefDef( // def (): views.html.anonfun$f$1
[error]
[error] ""
[error] []
[error] List(Nil)
[error] // tree.tpe=views.html.anonfun$f$1
[error] Block( // tree.tpe=Unit
[error] Apply( // def (): scala.runtime.AbstractFunction0 in class AbstractFunction0, tree.tpe=scala.runtime.AbstractFunction0
[error] index$$anonfun$f$1.super."" // def (): scala.runtime.AbstractFunction0 in class AbstractFunction0, tree.tpe=()scala.runtime.AbstractFunction0
[error] Nil
[error] )
[error]
[error] )
[error] )
[error] )
[error]
[error] == Expanded type of tree ==
[error]
[error] ConstantType(value = Constant(play.api.templates.Html))
[error]
[error] uncaught exception during compilation: java.io.IOException
[error] File name too long
[warn] two warnings found
[error] two errors found
[error] Compilation failed

User security

It would be great to have some form of user security to limit access to the system.

Kafka Web Console release v2.0.0 is creating a high number of open file handles (against Kafka 0.8.1.1, ZooKeeper 3.3.4)

I'm running Kafka Web Console release v2.0.0 against Kafka 0.8.1.1 and ZooKeeper 3.3.4

I'm consistently seeing the number of open file handles increasing when I launch Kafka Web Console after navigating to a topic on Zookeeper.
Once the file handles start to increase, they increase without any more navigation being done in the browser - meaning I only need to launch the web console and do nothing else beside monitor the number of open files and I'll see it increase every few seconds.
I've confirmed there are no other producers or consumers connecting to Kafka or Zookeeper.

After this runs for a while you'll get either of these errors:

  • Run a Kafka command like this:
$INSTALLDIR/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 4 --topic test2

You'll get an error like this:

Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 0; nested exception is:
    java.net.BindException: Address already in use
  • Java clients might get an error like this (due to "Too many open files"):
java.io.FileNotFoundException: /src1/fos/dev-team-tools/var/kafka/broker-0/replication-offset-checkpoint.tmp

The ulimit for the id that my Kafka process runs under has a very large value for the "open files".

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 610775
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 500000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 610775
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Note: I've also tried this pull request from @ibanner56 (#40), which is related to these issues (#36 and #37 from @mungeol), but it did not fix the issue.

To reproduce on Linux, do the following:

  1. Launch ZooKeeper
  2. Launch Kafka
  3. Create a topic with 4 partitions with 1 replication...
    $INSTALLDIR/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 4 --topic test2
  4. Open a PuTTY session and run this script in that window:
while [[ 1 == 1 ]]; do
  date
  echo "zookeeper: $(ls -ltr /proc/`ps -ef |grep zookeeper.server|grep -v grep|awk '{print $2}'`/fd |wc -l)"
  echo "Kafka: $(ls -ltr /proc/`ps -ef |grep kafka.Kafka     |grep -v grep|awk '{print $2}'`/fd |wc -l)"
  echo ""      
  sleep 5;
done
  5. Launch Kafka Web Console
  6. Browse to a topic
  7. Notice the number of "Kafka" connections in the PuTTY session should increase
  8. Wait several seconds. Notice the number of "Kafka" connections in the PuTTY session increases again, without doing anything.

Sample output from the script in step 4 after running for a couple of hours (with 8 topics defined on the Zookeeper instance, 1 replication each, 4 partitions each):
Wed Jan 21 18:44:29 EST 2015
zookeeper: 37
Kafka: 6013

Wed Jan 21 18:44:34 EST 2015
zookeeper: 37
Kafka: 6013

Wed Jan 21 18:44:39 EST 2015
zookeeper: 37
Kafka: 6045

...

Wed Jan 21 18:51:23 EST 2015
zookeeper: 37
Kafka: 6461

open files continually grow

I am testing this console by monitoring a kafka cluster with ~20 topics, each with ~3 consumer groups.

The number of open files keeps getting larger and larger.

After running for ~15 minutes, kafka-web-console has over 92,000 open files.

The number of open files on each kafka node is also increasing. Each one is ~13,000 open files.

I have set the 'offset-fetch-interval' to '600'.

It appears that the sockets never close.
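One way to confirm that the leaked descriptors are sockets is to inspect /proc on Linux. A sketch, using the current shell's pid as a stand-in for the console's pid:

```shell
# Count a process's open fds and how many of them are sockets (Linux only).
# $$ (this shell) is a stand-in; substitute the kafka-web-console pid.
pid=$$
total=$(ls /proc/$pid/fd | wc -l)
sockets=$(ls -l "/proc/$pid/fd" | grep -c 'socket:' || true)
echo "pid=$pid total_fds=$total socket_fds=$sockets"
```

Watching total_fds grow between runs while the console sits idle would confirm the leak described above.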

Topic display always errors out

I got the latest code and connected to the Play app. I added a couple of ZK hosts and they say "connected" on the zookeeper list page.

When I click on brokers, after 5 seconds I get an error in the Play console:

! @6jd9k2m2f - Internal server error, for (GET) [/topics.json] ->

play.api.Application$$anon$1: Execution exception[[ConnectionLossException: KeeperErrorCode = ConnectionLoss for /brokers/topics]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10-2.2.1.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10-2.2.1.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:165) [play_2.10-2.2.1.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:162) [play_2.10-2.2.1.jar:2.2.1]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library.jar:na]
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /brokers/topics

Are there any timeouts I can tweak? Any other pointers?

Application Package

Hi,

when I try to package the application with the Play framework ("play dist" or "play universal:package-zip-tarball") [1], the kafka-web-console package doesn't work. It starts and connects to the db, but all the views are missing: there is no zookeepers form, no brokers page view, and no topics page view, just the header and side bar with the three links (Zookeeper, Brokers and Topics).

I'm new to the Play framework and Scala, but it seems that something is not being linked correctly when the application is packaged by the Play framework.

Regards.
Dani

[1] http://www.playframework.com/documentation/2.2.x/ProductionDist

URL prefix?

Is there a way to run kafka-web-console with a URL prefix, e.g. so that all URLs and requests start with something like /kafka-web? This would allow us to proxy traffic to it from behind an HTTPS server that we use for restricting access into prod (we proxy to other web consoles under other URLs).

Right now, requests are full-path, i.e.:
link rel="stylesheet" media="screen" href="/assets/stylesheets/custom.css"

Thanks,
Jeff
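Absent application support for a context path, one workaround is to proxy the console under its own hostname, so that its root-relative /assets/... links keep resolving. A hedged nginx sketch; the hostname and upstream port are placeholders:

```
server {
    listen 443 ssl;
    server_name kafka-console.example.com;      # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:9000;       # kafka-web-console
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # keep the WebSocket topic feed working
        proxy_set_header Connection "upgrade";
    }
}
```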

Play service crashing after extended period with window open.

Service crashes with the following stack trace:

Uncaught error from thread [play-akka.actor.default-dispatcher-800] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[play]
java.lang.NoClassDefFoundError: common/Util$$anonfun$getPartitionsLogSize$3$$anonfun$apply$19$$anonfun$apply$1$$anonfun$applyOrElse$1
    at common.Util$$anonfun$getPartitionsLogSize$3$$anonfun$apply$19$$anonfun$apply$1.applyOrElse(Util.scala:82)
    at common.Util$$anonfun$getPartitionsLogSize$3$$anonfun$apply$19$$anonfun$apply$1.applyOrElse(Util.scala:81)
    at scala.runtime.AbstractPartialFunction$mcJL$sp.apply$mcJL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcJL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcJL$sp.apply(AbstractPartialFunction.scala:25)
    at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185)
    at scala.util.Try$.apply(Try.scala:161)
    at scala.util.Failure.recover(Try.scala:185)
    at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:387)
    at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:387)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:29)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.ClassNotFoundException: common.Util$$anonfun$getPartitionsLogSize$3$$anonfun$apply$19$$anonfun$apply$1$$anonfun$applyOrElse$1
    at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 23 more
Caused by: java.io.FileNotFoundException: /home/ubuntu/app/kafka-web-console/target/scala-2.10/classes/common/Util$$anonfun$getPartitionsLogSize$3$$anonfun$apply$19$$anonfun$apply$1$$anonfun$applyOrElse$1.class (Too many open files)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:146)
    at sun.misc.URLClassPath$FileLoader$1.getInputStream(URLClassPath.java:1086)
    at sun.misc.Resource.cachedInputStream(Resource.java:77)
    at sun.misc.Resource.getByteBuffer(Resource.java:160)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:436)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    ... 29 more

Here is the function it seems to be failing in, as mentioned at the top of the stack trace (from Util.scala):

  def getPartitionsLogSize(topicName: String, partitionLeaders: Seq[String]): Future[Seq[Long]] = {
    Logger.debug("Getting partition log sizes for topic " + topicName + " from partition leaders " + partitionLeaders.mkString(", "))
    return for {
      clients <- Future.sequence(partitionLeaders.map(addr => Future((addr, Kafka.newRichClient(addr)))))
      partitionsLogSize <- Future.sequence(clients.zipWithIndex.map { tu =>
        val addr = tu._1._1
        val client = tu._1._2
        var offset = Future(0L)
        if (!addr.isEmpty) {
          offset = twitterToScalaFuture(client.offset(topicName, tu._2, OffsetRequest.LatestTime)).map(_.offsets.head).recover {
            case e => Logger.warn("Could not connect to partition leader " + addr + ". Error message: " + e.getMessage); 0L
          }
        }

        client.close()
        offset
      })
    } yield partitionsLogSize
  }

logger settings in application.conf are ignored

Hello,

I was having a bit of trouble getting rid of the debug messages (from Util.scala). I tried adding an application-logger.xml, but that didn't help until I specified -Dlogger.resource=/pat/to/that.xml.

So, I'd recommend putting a nice XML into the repository and specifying application.conf as the property file in the XML.

Thanks,
Pas

deploy fail

Hi, I am not familiar with Play!
When I use the "play run" command, everything is OK.
But when I want to deploy the project, I use "play clean stage" and then start it.
The project starts, but when I visit it in the browser I get a 404.

What's wrong?

Dependency Issue ? Jars not found

Java 1.7
OSX 10.9

[kafka-web-console] $ start

[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::              FAILED DOWNLOADS            ::
[warn]  :: ^ see resolution messages for details  ^ ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  :: log4j#log4j;1.2.15!log4j.jar
[warn]  :: javax.mail#mail;1.4!mail.jar
[warn]  :: javax.activation#activation;1.1!activation.jar
[warn]  :: jline#jline;0.9.94!jline.jar
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[trace] Stack trace suppressed: run last *:update for the full output.
[error] (*:update) sbt.ResolveException: download failed: log4j#log4j;1.2.15!log4j.jar
[error] download failed: javax.mail#mail;1.4!mail.jar
[error] download failed: javax.activation#activation;1.1!activation.jar
[error] download failed: jline#jline;0.9.94!jline.jar

It cannot start because of these errors.

Thanks,
Mark

Build info

Hello,
I would like to try this Kafka web console, but I cannot find any build and deploy instructions in the readme or wiki. Can you provide them?

Restart Zookeeper problem

After restarting one of the ZKs in my cluster, kafka-web-console has trouble reconnecting to it automatically. Its state is always "connecting" and it never connects.
Is there a reason for this, or a solution?

Is this tool supporting out-of-box monitoring?

I got this tool working fine on my DEV cluster; however, I do not want to install it on my production servers. I am thinking of installing it on a Vagrant machine and remotely monitoring the production cluster (out of the box). Is that possible?

I am not very familiar with the Play framework, so I am wondering how it collects metrics from Kafka and displays them on the web under the VM's IP. I assume there is somewhere I can configure the production IP that this tool points to.

Thanks

AL

Topic Feed hangs browser

For a topic with high volume, this uses a huge amount of memory. Maybe there's a way to show only the last 'X' messages, or to provide some sort of paging capability?
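A bounded buffer along these lines could cap the feed's memory use (a sketch only, not the console's actual implementation; the limit of 3 in the demo is arbitrary):

```scala
import scala.collection.mutable

// Keep only the most recent `limit` messages; older ones are dropped
// as new ones arrive, so memory use stays bounded regardless of volume.
class BoundedFeed[A](limit: Int) {
  private val buffer = mutable.Queue.empty[A]

  def push(msg: A): Unit = {
    buffer.enqueue(msg)
    while (buffer.size > limit) buffer.dequeue()
  }

  def latest: Seq[A] = buffer.toSeq
}

object BoundedFeedDemo extends App {
  val feed = new BoundedFeed[Int](3)
  (1 to 5).foreach(feed.push)
  println(feed.latest.mkString(","))  // only the last three messages survive
}
```

Paging could then be layered on top by slicing `latest`, rather than holding every received message in the page.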

Cannot see graphs showing consumer offset and lag history

Hi,

I was able to run the application, but I am unable to see the graphs showing consumer offset and lag history.
I tried Kafka Topics -> clicked on one of the topic rows -> a blank page comes up.

My topic feeds are also empty.

Update: I figured out that the OffsetPoint table never gets any data, because the insert into that table is failing! Though I don't see any error in the log. I am using MySQL as the database.

Change kafka-web-console port

How can I change the default port (9000) to another one (8080, for example)? I can't find a configuration file where I can change it.

Thanks.
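In Play, the HTTP port is a runtime setting rather than a config-file entry, so it is passed on the command line (a sketch assuming a staged build; the start-script name is an assumption):

```shell
# Development mode on port 8080
play "run 8080"

# Production mode: override http.port when starting the staged app
target/universal/stage/bin/kafka-web-console -Dhttp.port=8080
```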

Count not connect to partition leader localhost:9092 (connection refused)

The web console is working fine, but it shows these errors on each fetch interval:

[debug] application - Getting partition log sizes for topic Example from partition leaders localhost:9092, localhost:9092
[debug] application - Getting partition log sizes for topic AnotherExample from partition leaders localhost:9092, localhost:9092
[warn] application - Count not connect to partition leader localhost:9092. Error message: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9092
[warn] application - Count not connect to partition leader localhost:9092. Error message: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9092
[warn] application - Count not connect to partition leader localhost:9092. Error message: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9092
[warn] application - Count not connect to partition leader localhost:9092. Error message: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9092

I'm running Zookeeper and Kafka on the same machine.
When I start the Kafka server, the output shows the leader election completing fine:

[2014-07-09 10:43:45,814] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2014-07-09 10:43:45,856] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2014-07-09 10:43:46,371] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2014-07-09 10:43:46,377] INFO Registered broker 0 at path /brokers/ids/0 with address localhost:9092. (kafka.utils.ZkUtils$)
[2014-07-09 10:43:46,400] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2014-07-09 10:43:46,910] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [Example,1],[AnotherExample,1],[AnotherExample,0],[Example,0] (kafka.server.ReplicaFetcherManager)
[2014-07-09 10:43:47,007] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [Example,1],[AnotherExample,1],[AnotherExample,0],[Example,0] (kafka.server.ReplicaFetcherManager)

Is this a problem with the web-console app, or with my Kafka install?
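One thing worth noting: the broker log above shows it registered itself in Zookeeper under the address localhost:9092, and that registered address is what the console uses to reach partition leaders. If the console runs on a different host, a common fix is to register the broker under a resolvable hostname in config/server.properties (Kafka 0.8's host.name setting; the hostname below is a placeholder):

```properties
# config/server.properties
# Register under a resolvable hostname instead of localhost
host.name=kafka-broker-1.example.com
```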

Performance Monitoring

Hi - based purely on the screenshots (I haven't tried it out yet), is there any sort of view of performance information? If not, that would be cool, especially if it could graph performance over time.
