
ansible-zookeeper's Introduction

ansible-zookeeper


ZooKeeper playbook for Ansible

Installation

ansible-galaxy install AnsibleShipyard.ansible-zookeeper
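To pin a specific release rather than installing the latest, a requirements.yml entry should work; the tag below is one of the releases mentioned later on this page, but verify it on Galaxy:

# requirements.yml (version tag is an example)
- src: AnsibleShipyard.ansible-zookeeper
  version: v0.22.0

ansible-galaxy install -r requirements.yml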

Dependencies

Java

Requirements

Ansible 1.6 or later

Role Variables

---
zookeeper_version: 3.4.12
zookeeper_url: http://www.us.apache.org/dist/zookeeper/zookeeper-{{zookeeper_version}}/zookeeper-{{zookeeper_version}}.tar.gz

# Flag that selects whether systemd or upstart is used for the init service.
# Note: by default Ubuntu 15.04 and later use systemd (but support switching to upstart).
zookeeper_debian_systemd_enabled: "{{ ansible_distribution_version|version_compare('15.04', '>=') }}"
zookeeper_debian_apt_install: false
# (Optional) Add custom 'ppa' repositories depending on the distro version (only used when zookeeper_debian_apt_install is true).
# Example: use a community ZooKeeper v3.4.8 deb package for Ubuntu 14.04 (where the latest official version is v3.4.5).
zookeeper_debian_apt_repositories:
  - repository_url: "ppa:ufscar/zookeeper"
    distro_version: "14.04"

apt_cache_timeout: 3600
zookeeper_register_path_env: false

client_port: 2181
init_limit: 5
sync_limit: 2
tick_time: 2000
zookeeper_autopurge_purgeInterval: 0
zookeeper_autopurge_snapRetainCount: 10
zookeeper_cluster_ports: "2888:3888"
zookeeper_max_client_connections: 60

data_dir: /var/lib/zookeeper
log_dir: /var/log/zookeeper
zookeeper_dir: /opt/zookeeper-{{zookeeper_version}} # or /usr/share/zookeeper when zookeeper_debian_apt_install is true
zookeeper_conf_dir: "{{zookeeper_dir}}" # or /etc/zookeeper when zookeeper_debian_apt_install is true
zookeeper_tarball_dir: /opt/src

zookeeper_hosts_hostname: "{{inventory_hostname}}"
# List of dicts, i.e. zookeeper_hosts: [{host: ..., id: ...}, {host: ..., id: ...}, ...]
zookeeper_hosts:
  - host: "{{zookeeper_hosts_hostname}}" # the machine running the play
    id: 1

# Dict of ENV settings to be written into the (optional) conf/zookeeper-env.sh
zookeeper_env: {}
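# Example (hypothetical values): anything set here is written into zookeeper-env.sh,
# e.g. zookeeper_env: { JVMFLAGS: "-Xmx1024m" }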

# Controls Zookeeper myid generation
zookeeper_force_myid: yes

Example Playbook

- name: Installing ZooKeeper
  hosts: all
  become: yes
  roles:
    - role: AnsibleShipyard.ansible-zookeeper

Example Retrieving Tarball From S3

- name: Installing ZooKeeper
  hosts: all
  become: yes
  vars:
    zookeeper_archive_s3_bucket: my-s3-bucket
    zookeeper_archive_s3_object: my/s3/directory/zookeeper-{{zookeeper_version}}.tar.gz
  roles:
    - role: AnsibleShipyard.ansible-zookeeper

Cluster Example

- name: Zookeeper cluster setup
  hosts: zookeepers
  become: yes
  roles:
    - role: AnsibleShipyard.ansible-zookeeper
      zookeeper_hosts: "{{groups['zookeepers']}}"

This assumes zookeepers is a host group defined in the inventory file:

[zookeepers]
server[1:3]
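With that group and the default cluster ports, the rendered zoo.cfg gains one line per member, roughly like this (exact id assignment depends on the role version and on how zookeeper_hosts is built):

server.1=server1:2888:3888
server.2=server2:2888:3888
server.3=server3:2888:3888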

Custom IP per host group

zookeeper_hosts: "
    {%- set ips = [] %}
    {%- for host in groups['zookeepers'] %}
    {{- ips.append(dict(id=loop.index, host=host, ip=hostvars[host]['ansible_default_ipv4'].address)) }}
    {%- endfor %}
    {{- ips -}}"
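For two hosts, this expression evaluates to a structure roughly like the following (hostnames and IPs hypothetical):

zookeeper_hosts:
  - { id: 1, host: zoo1, ip: 10.0.0.11 }
  - { id: 2, host: zoo2, ip: 10.0.0.12 }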

See this sample playbook, which shows how to use this role together with others. It is part of ansible-galaxy-roles and serves as a curation (and thus an example) of all our Ansible playbooks.

License

The MIT License (MIT)

Copyright (c) 2014 Kien Pham

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

AnsibleShipyard

Our related playbooks

  1. ansible-mesos
  2. ansible-marathon
  3. ansible-chronos
  4. ansible-zookeeper

Author Information

@AnsibleShipyard/developers and others.

ansible-zookeeper's People

Contributors

darwiner, davidwittman, dzeban, eladamitpxi, ernestas-poskus, ernoaapa, jasongiedymin, jordiclariana, kknd22, lhoss, maqdev, mhamrah, runsfor, veger


ansible-zookeeper's Issues

Zookeeper doesn't stop or restart properly on CentOS 6

I was looking into this and the upstart script was the cause: using su -c means that when the daemon is killed or restarted, only the su process gets killed while the daemon itself does not, so it sits holding port :2181 and the new daemon instance can't run.

The fix I used, which requires installing the daemonize package, was this change to the end of zookeeper.conf.j2:

...
expect daemon

script
    exec daemonize -u zookeeper -p /var/run/zookeeper.pid -o /var/log/zookeeper/init-zookeeper.log {{zookeeper_dir}}/bin/zkServer.sh start-foreground
end script

Although it would require tasks/RedHat.yml to be changed to add 'daemonize' to the 'Install OS Packages' list.
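A sketch of that change, assuming the task shape used by the role's 'Install OS Packages' task (module and list contents assumed):

- name: Install OS Packages
  yum: name={{ item }} state=present
  with_items:
    - libselinux-python
    - daemonize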

I actually fixed it with my own tasks overriding the defaults.

Exception str object has no attribute host

One of the tasks fails. I'm using the role like this:
- { role: 'ansible-zookeeper', zookeeper_hosts: "{{ groups.mesos_masters }}", tags: ['zookeeper'] }

TASK: [ansible-zookeeper | Overwrite default config file] *********************
fatal: [mesos-master] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'str object' has no attribute 'host'", 'failed': True}

The issue lies within groups.mesos_masters, which contains the following list: ['mesos-master', 'mesos-master-0c1', 'mesos-master-7e0'], so it seems we have to call the role with different parameters.
Still investigating.

Edit: In zoo.cfg.j2 I changed {{ server.host }} to {{ hostvars[server]['ansible_hostname'] }}.
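In context, the server loop in zoo.cfg.j2 would then look roughly like this (the surrounding template is an assumption; ids come from loop.index because plain hostnames carry no id field):

{% for server in zookeeper_hosts %}
server.{{ loop.index }}={{ hostvars[server]['ansible_hostname'] }}:{{ zookeeper_cluster_ports }}
{% endfor %}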

Debian/Ubuntu tasks set up an outdated ZooKeeper (the latest version in the Ubuntu 14.04 repos is 3.4.5)

We need to set up at least v3.4.6, ideally the latest 3.4.8, since the motivation to provide a recent apt package for Ubuntu 14.04 seems low; see: https://answers.launchpad.net/ubuntu/+source/zookeeper/+question/289577

Interestingly, an older version of this script supported later ZooKeeper versions by installing from the official binaries (as is still done for the RedHat tasks).
Commit that changed this: a6160aa

Is there a chance to go back to using the official archive (or to accept such a PR)?

zookeeper version 3.4.9 is unavailable

Need to update the ZooKeeper version to something other than 3.4.9. This version is unavailable, and the role fails when we run the playbook.

Please update to the latest (3.4.11) or provide a way to override the version.

Created a pull req: #74
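Until a new tag lands, a hedged workaround is to override both the version and the download URL when applying the role, pointing at the Apache archive (the URL pattern here is an assumption based on the role's zookeeper_url default):

- role: AnsibleShipyard.ansible-zookeeper
  zookeeper_version: 3.4.11
  zookeeper_url: https://archive.apache.org/dist/zookeeper/zookeeper-{{zookeeper_version}}/zookeeper-{{zookeeper_version}}.tar.gz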

[New Feature] Testing with molecule (in docker containers)

Main motivation: reduce testing boilerplate and hacks.

Status

Here's my initial work (Travis not yet updated to use Molecule):
https://github.com/AnsibleShipyard/ansible-zookeeper/compare/master...teralytics:molecule_docker_tests?expand=1

PR: planned soon, after resolving these tasks:

  • fix ansible-lint issues (see #59)
  • test the actual testinfra tests
  • cleanups

Howto

molecule test

and the 3 containers will be created, with Java and ZooKeeper deployed; the run currently stops on the ansible-lint warnings mentioned above.

The conditional check 'not zookeeper_debian_systemd_enabled' failed.

Hi, I have a problem with the ansible-zookeeper installation. The logs are below:

fatal: [humio1]: FAILED! => {"msg": "The conditional check 'not zookeeper_debian_systemd_enabled' failed. The error was: An unhandled exception occurred while templating '{{ _ubuntu_1504 or _debian_8 }}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ ansible_distribution == 'Debian' and ansible_distribution_version|version_compare(8.0, '>=') }}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: template error while templating string: no filter named 'version_compare'. String: {{ ansible_distribution == 'Debian' and ansible_distribution_version|version_compare(8.0, '>=') }}\n\nThe error appears to be in '/root/.ansible/roles/AnsibleShipyard.ansible-zookeeper/tasks/upstart.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Check if /etc/init exists\n ^ here\n"}
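The root cause is the version_compare filter: later Ansible releases renamed it to version and eventually removed the old name, which produces exactly this "no filter named 'version_compare'" error. A hedged fix in the role defaults would be:

zookeeper_debian_systemd_enabled: "{{ ansible_distribution_version is version('15.04', '>=') }}"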

My upstart.yml:


- name: Check if /etc/init exists
  stat: path=/etc/init/
  register: etc_init

- name: Upstart script.
  template: src=zookeeper.conf.j2 dest=/etc/init/zookeeper.conf
  when:
    - etc_init.stat.exists == true
    - ansible_service_mgr != 'systemd'
  notify:
    - Restart zookeeper

lsb_release -a results:

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.6 LTS
Release: 16.04
Codename: xenial

I am waiting for your help; thanks in advance.

[New Feature] configurable autopurge for zookeeper snapshots and logs

A very useful ZooKeeper feature that avoids filling up your ZooKeeper data and log dirs.
A reasonable config to add to zoo.cfg would be, for example:

+autopurge.purgeInterval=24
+autopurge.snapRetainCount=10

I propose to make this configurable by introducing 2 new role variables (whose defaults keep the current behavior of not using autopurge):

zookeeper_autopurge_purgeInterval=0
zookeeper_autopurge_snapRetainCount=10

I'm ready to contribute this. Before creating the PR, I'm happy to hear comments.
@ernestas-poskus?

Tag release

Can you start tagging releases in this role like you do the others? Thanks!

How do you write your hosts?

I'm getting this error:

fatal: [ec2-54-204-214-172.compute-1.amazonaws.com]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: ERROR! 'unicode object' has no attribute 'host'"}
...

My hosts file is:

[mesos_primaries]
ec2-54-204-214-172.compute-1.amazonaws.com zoo_id=1 consul_bootstrap=true
ec2-54-235-59-210.compute-1.amazonaws.com zoo_id=2
ec2-54-83-161-83.compute-1.amazonaws.com zoo_id=3

How do I write my hosts file so it acknowledges the host name?
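One way to feed those hostnames and their zoo_id values into the role is a sketch following the same Jinja pattern as the README's Custom IP example (group name taken from the inventory above):

zookeeper_hosts: "
    {%- set hosts = [] %}
    {%- for h in groups['mesos_primaries'] %}
    {{- hosts.append(dict(host=h, id=hostvars[h]['zoo_id'])) }}
    {%- endfor %}
    {{- hosts -}}"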

systemd - ZK starting up before the network is ready and available after a server reboot

In ansible-zookeeper/templates/zookeeper.service.j2

It might be a good idea to change the following:

[Unit]
Description=ZooKeeper

To:

[Unit]
Description=ZooKeeper
After=network.target
Wants=network.target

Everything works fine with a server that is already up and running, but in the case of rebooting a server, ZK starts too early, before the network is even available, leading to errors about not being able to resolve other cluster member hostnames, or timing out while trying to reach them.

Zookeeper not starting

Hi,

I'm running the playbook.yml from ansible-mesos-playbook, which installs ansible-zookeeper as one of the roles. It runs through the RedHat.yml file, creating data folders and log folders, setting up the upstart script, etc. When it comes to running/starting the zookeeper service, it errors out as below. Any idea why this is?

TASK: [ansible-zookeeper | Update apt cache] **********************************
skipping: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Apt install required system packages.] *************
skipping: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Overwrite myid file.] ******************************
skipping: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Overwrite default config file] *********************
skipping: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Restart zookeeper] *********************************
skipping: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | file path=/opt/src state=directory] ****************
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | file path={{zookeeper_dir}} state=directory] *******
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Download zookeeper version.] ***********************
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Install OS Packages] *******************************
ok: [svdpdac015.techlabs.accenture.com] => (item=libselinux-python)

TASK: [ansible-zookeeper | Unpack tarball.] ***********************************
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | group name=zookeeper system=yes] *******************
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | user name=zookeeper group=zookeeper system=yes] ****
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Change ownership on zookeeper directory.] **********
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Create zookeeper data folder.] *********************
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Create zookeeper logs folder.] *********************
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Upstart script.] ***********************************
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Write myid file.] **********************************
ok: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Configure zookeeper] *******************************
changed: [svdpdac015.techlabs.accenture.com]

TASK: [ansible-zookeeper | Start zookeeper] ***********************************
failed: [svdpdac015.techlabs.accenture.com] => {"failed": true}
msg: zookeeper: unrecognized service
zookeeper: unrecognized service

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
to retry, use: --limit @/root/playbook.retry

svdpdac015.techlabs.accenture.com : ok=21 changed=1 unreachable=0 failed=1
svdpdac016.techlabs.accenture.com : ok=6 changed=0 unreachable=0 failed=0
svdpdac017.techlabs.accenture.com : ok=6 changed=0 unreachable=0 failed=0

Version format incompatibility

[WARNING]: - ansibleshipyard.ansible-zookeeper was NOT installed successfully: Unable to compare role versions (v0.0.3, v0.0.4, v0.0.5, v0.0.6, v0.9.0, v0.9.1, v0.9.1.1,
v0.9.2, v0.10.0, v0.11.0, v0.12.0, v0.13.0, v0.14.0, v0.15.0, v0.16.0, v0.16.1, v0.16.2, v0.18.0, v0.19.0, v0.20.0, v0.21.0, v0.22.0, 0.23.0, v0.17.0) to determine the most
recent version due to incompatible version formats. Please contact the role author to resolve versioning conflicts, or specify an explicit role version to install.

Cluster of several Zookeepers

Hi,

how can I set up a cluster with, e.g., 5 ZooKeeper instances (each having a different id)?

regards
guenther
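
A sketch of one answer, based on the README's cluster example (hostnames hypothetical): list all five instances explicitly with their ids and pass them to the role.

- hosts: zookeepers
  become: yes
  roles:
    - role: AnsibleShipyard.ansible-zookeeper
      zookeeper_hosts:
        - { host: zoo1, id: 1 }
        - { host: zoo2, id: 2 }
        - { host: zoo3, id: 3 }
        - { host: zoo4, id: 4 }
        - { host: zoo5, id: 5 }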

fix ansible-lint warnings

During my work on integrating Molecule testing (see #60), I got the following ansible-lint warnings (which need to be fixed before Molecule runs the actual tests):

➜  ansible-zookeeper git:(master) ✗ molecule verify
--> Executing ansible-lint...
[ANSIBLE0002] Trailing whitespace
/Users/lhoss/IdeaProjects/ansible-zookeeper/meta/main.yml:10
  # the ones that apply to your role. If you don't see your

[ANSIBLE0002] Trailing whitespace
/Users/lhoss/IdeaProjects/ansible-zookeeper/meta/main.yml:116


[ANSIBLE0006] tar used in place of unarchive module
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:12
Task/Handler: Unpack tarball.

[ANSIBLE0006] tar used in place of unarchive module
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:12
Task/Handler: Unpack tarball.

[ANSIBLE0011] All tasks should be named
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:16
Task/Handler: group name=zookeeper system=yes

[ANSIBLE0011] All tasks should be named
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:16
Task/Handler: group name=zookeeper system=yes

[ANSIBLE0011] All tasks should be named
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:17
Task/Handler: user system=yes name=zookeeper group=zookeeper

[ANSIBLE0011] All tasks should be named
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:17
Task/Handler: user system=yes name=zookeeper group=zookeeper

[ANSIBLE0009] Octal file permissions must contain leading zero
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:42
Task/Handler: Add zookeeper's bin dir to the PATH

[ANSIBLE0009] Octal file permissions must contain leading zero
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:42
Task/Handler: Add zookeeper's bin dir to the PATH

Note: even though these warnings can sometimes be of a 'subjective' nature, I aim to follow them (on the roles I will gradually move to Molecule).
I'd work on fixing them myself (but only later this week), unless somebody is faster :)

The conditional check 'etc_init.stat.exists == true' failed when running with tag deploy

I use the 0.9.2 version installed with ansible-galaxy.

I am trying to create an image with Packer and then run the deploy stage when deploying servers.
I skip the deploy tag when creating the image and run the deploy tag only when provisioning servers.
Running the deploy tag results in the following error:

FAILED! => {"failed": true, "msg": "The conditional check 'etc_init.stat.exists == true' failed. The error was: error while evaluating conditional (etc_init.stat.exists == true): 'etc_init' is undefined\n\nThe error appears to have been in '/etc/ansible/roles/AnsibleShipyard.ansible-zookeeper/tasks/RedHat.yml': line 37, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Upstart script.\n ^ here\n"}

I checked; the reason is that the etc_init task is not tagged with deploy, so it does not get executed when running with the deploy tag only. I changed it locally and the problem was gone. I had to add the tag to two tasks (links are to the release I am using):
https://github.com/AnsibleShipyard/ansible-zookeeper/blob/v0.9.2/tasks/RedHat.yml#L33
https://github.com/AnsibleShipyard/ansible-zookeeper/blob/v0.9.2/tasks/RedHat.yml#L44

I have noticed that the newer version does not perform that check for systemd, so it could be a good idea to remove the check for upstart as well; in the upstart case I think the error will remain in newer versions too.
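
A sketch of that local change, assuming the task shape shown in the upstart excerpt earlier on this page (the second linked task would need the same tag):

- name: Check if /etc/init exists
  stat: path=/etc/init/
  register: etc_init
  tags:
    - deploy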

data_dir & log_dir not properly initialized for Debian/Ubuntu (non-tarball deployment)

Just found this out on a deployment on Ubuntu 16.04.
I will definitely provide a PR fix ASAP, extracting the working task (from tarball.yml) into common-config.yml:

- name: "Create zookeeper {{item}} directory."
  file: path={{item}} state=directory owner=zookeeper group=zookeeper
  tags: bootstrap
  with_items:
    - "{{data_dir}}"
    - "{{log_dir}}"

Actually, I'd like to do the same with this (not too old) task/feature (also in tarball.yml):

- name: Add zookeeper's bin dir to the PATH
  copy: content="export PATH=$PATH:{{zookeeper_dir}}/bin" dest="/etc/profile.d/zookeeper_path.sh" mode=755
  when: zookeeper_register_path_env

@ernestas-poskus, OK for you to do this in the same PR?

PS: Finally, we would like to apply more cleanups to the role (in a separate PR!), especially renaming the vars that lack the zookeeper_ prefix.
Again, the question is how to best keep backward compatibility after renaming.
I'd propose adding an info block to the README showing how to map new to old vars (rather than adding an automatic mapping in the default vars):

data_dir: zookeeper_data_dir
log_dir: zookeeper_log_dir

please update version to 0.23 on galaxy.ansible.com

ansible-galaxy install AnsibleShipyard.ansible-zookeeper
- downloading role 'ansible-zookeeper', owned by AnsibleShipyard
- downloading role from https://github.com/AnsibleShipyard/ansible-zookeeper/archive/v0.22.0.tar.gz
- extracting AnsibleShipyard.ansible-zookeeper to /home/user/.ansible/roles/AnsibleShipyard.ansible-zookeeper
- AnsibleShipyard.ansible-zookeeper (v0.22.0) was installed successfully

Automatically install dependencies

Hello!
ansible-galaxy install AnsibleShipyard.ansible-zookeeper

Run playbook

- hosts: zookeeper
  become: yes
  roles:
    - role: AnsibleShipyard.ansible-zookeeper

But dependencies are not installed automatically.
Please add automatic installation of dependencies.
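
The role lists Java as a dependency (see Dependencies above) but does not install it itself; the log below shows zkServer.sh failing for exactly that reason. A hedged workaround in the meantime (package name varies by distro):

- hosts: zookeeper
  become: yes
  pre_tasks:
    - name: Install Java (OpenJDK package name is distro-specific)
      package:
        name: java-1.8.0-openjdk
        state: present
  roles:
    - role: AnsibleShipyard.ansible-zookeeper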

Nov 20 10:16:34 localhost zkServer.sh: Using config: /opt/zookeeper-3.4.12/bin/../conf/zoo.cfg
Nov 20 10:16:34 localhost zkServer.sh: /opt/zookeeper-3.4.12/bin/zkServer.sh: line 170: exec: java: not found
Nov 20 10:16:34 localhost systemd: zookeeper.service: main process exited, code=exited, status=127/n/a

Missing option to pass autopurge configuration

In the zoo.cfg.j2 template there is no way of passing autopurge configuration.
It can simply be added with:

{% if zookeeper_autopurge_purgeInterval > 0 %}
autopurge.purgeInterval={{ zookeeper_autopurge_purgeInterval }}
autopurge.snapRetainCount={{ zookeeper_autopurge_snapRetainCount }}
{% endif %}
