Sentry Installation via Docker
Aug 31, 2017
6 minute read

Installing Sentry with Docker

There is a second method of installing Sentry, i.e. via Docker. The only dependency listed is a working Docker CE installation.

This setup is tested on Ubuntu 16.04 Server.

Installing Docker

For this method I have a Vultr (https://www.vultr.com/?ref=7135220) instance running Ubuntu 16.04. Docker Community Edition (CE) is ideal for developers and small teams looking to get started with Docker and experiment with container-based apps.

The following are the steps I followed from the Docker installation guide.

Pre-set up

  • Go to the server section
  • Select the OS
  • Select an installation method. Here, I have chosen the recommended approach, which is setting up Docker's repository and installing from it.

Install using the repository

Setting up the repository

Update the package index and install the build essentials:

deploy@vultr$ sudo apt-get update
deploy@vultr$ sudo apt-get install build-essential

Install packages that allow apt to use a repository over HTTPS:

deploy@vultr$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

Next, add Docker's official GPG key. apt-key is used to manage the list of keys used by apt to authenticate packages; packages that have been authenticated using these keys are considered trusted.

deploy@vultr$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Verifying the fingerprint.

deploy@vultr$ sudo apt-key fingerprint 0EBFCD88

As of now, Docker lists the following key fingerprint:

9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88

Finally, set up the stable repository:

deploy@vultr$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

Install Docker CE

After adding the repository, it's time to update the package index once more and install Docker CE:

deploy@vultr$ sudo apt-get update
deploy@vultr$ sudo apt-get install docker-ce

Verifying the installation

Verify that Docker CE is installed correctly by running the hello-world image.

deploy@vultr$ sudo docker run hello-world

At this point, if you are wondering what goes on behind the scenes, the hello-world output itself describes the process Docker follows:

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

Post-installation

As you might know, using the root account directly is not advisable; instead, user accounts with sudo privileges are created. We can manage Docker as a non-root user by adding a docker group. As per the documentation,

The docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The docker daemon always runs as the root user. If you don’t want to use sudo when you use the docker command, create a Unix group called docker and add users to it. When the docker daemon starts, it makes the ownership of the Unix socket read/writable by the docker group.

deploy@vultr$ sudo groupadd docker

Add the deploy user to the docker group.

deploy@vultr$ sudo usermod -aG docker $USER

Verify that we are able to run docker commands without using sudo:

deploy@vultr$ docker run hello-world
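
If this still fails with a permission error, the group change most likely has not taken effect in the current shell yet; logging out and back in, or starting a new shell with newgrp, is the usual fix. A minimal check, assuming the default socket path /var/run/docker.sock:

deploy@vultr$ newgrp docker                 # start a shell with the docker group active
deploy@vultr$ ls -l /var/run/docker.sock    # group ownership should read "docker", e.g. srw-rw---- root docker
deploy@vultr$ docker run hello-world        # should now work without sudo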

Building Container

Now that we have installed Docker, as per the documentation, in order to install Sentry as a container we need to clone (or fork) the getsentry/onpremise repository:

deploy@vultr$ git clone https://github.com/getsentry/onpremise

To build a local image, run:

deploy@vultr:~/onpremise$ sudo make build
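
As a quick sanity check (an optional step, not part of the original guide), the freshly built image should now show up in the local image list:

deploy@vultr:~/onpremise$ sudo docker images
# look for a repository named "sentry-onpremise" in the output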

Running Dependent Services

Sentry requires Redis and PostgreSQL. To run containers, Docker provides the run command.

deploy@vultr$ sudo docker run

Various options are listed in the Docker documentation; the ones used in this post are summarised below.
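
To see them locally, --help works too. The flags that appear throughout this guide are the usual suspects:

deploy@vultr$ sudo docker run --help
# flags used in this guide:
#   --detach   run the container in the background
#   --name     give the container a fixed name so other containers can --link to it
#   --env      pass an environment variable into the container
#   --link     connect to another named container
#   --publish  map a host port to a container port
#   --rm       remove the container when it exits
#   -it        interactive mode with a TTY (needed for prompts)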

Redis

deploy@vultr:~/onpremise$ sudo docker run \
  --detach \
  --name sentry-redis \
  redis:3.2-alpine

PostgreSQL

deploy@vultr:~/onpremise$ sudo docker run \
  --detach \
  --name sentry-postgres \
  --env POSTGRES_PASSWORD=secret \
  --env POSTGRES_USER=sentry \
  postgres:9.5

Outbound email

deploy@vultr:~/onpremise$ sudo docker run \
  --detach \
  --name sentry-smtp \
  tianon/exim4
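
With the three dependency containers started, a quick docker ps should list sentry-redis, sentry-postgres and sentry-smtp as running:

deploy@vultr$ sudo docker ps
# expect three containers: sentry-redis, sentry-postgres, sentry-smtp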

Running Sentry Services

To test whether the Sentry image was built properly, run the following. By default the image is tagged sentry-onpremise, which is the value used for $REPOSITORY when it is not set explicitly.

deploy@vultr:~/onpremise$ sudo docker run \
   --rm sentry-onpremise \
   --help

We should see Sentry's help output. Next, generate a secret key; for this, run:

deploy@vultr:~/onpremise$ sudo docker run \
   --rm sentry-onpremise \
   config generate-secret-key

The above command will print a key. Copy the key that gets printed and store it in an environment variable named $SENTRY_SECRET_KEY.

To keep it across further login sessions, store it in the ~/.bashrc file, so that even if our SSH session expires and we log in again, $SENTRY_SECRET_KEY still has its value.

deploy@vultr$ echo 'export SENTRY_SECRET_KEY="xxxxxxx......."' >> ~/.bashrc
deploy@vultr$ source ~/.bashrc
deploy@vultr$ echo $SENTRY_SECRET_KEY

Running Migrations

When running migrations, as per the documentation, don't use the --detach flag, because the command will prompt you to create a user. This step requires interactive mode, and --detach would run the whole process in the background.

deploy@vultr$ sudo docker run \
  --link sentry-redis:redis \
  --link sentry-postgres:postgres \
  --link sentry-smtp:smtp \
  --env SENTRY_SECRET_KEY=${SENTRY_SECRET_KEY} \
  --rm -it sentry-onpremise upgrade
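
If the interactive user-creation prompt was skipped, a superuser can still be created afterwards with Sentry's createuser command. A sketch under the same container setup (exact prompts and flags may vary by Sentry version):

deploy@vultr$ sudo docker run \
  --link sentry-redis:redis \
  --link sentry-postgres:postgres \
  --link sentry-smtp:smtp \
  --env SENTRY_SECRET_KEY=${SENTRY_SECRET_KEY} \
  --rm -it sentry-onpremise \
  createuser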

Running Sentry Web-Service

Once the migrations finish, it's time to start the Sentry app as a web service.

deploy@vultr$ sudo docker run \
  --detach \
  --name sentry-web-01 \
  --publish 9000:9000 \
  --link sentry-redis:redis \
  --link sentry-postgres:postgres \
  --link sentry-smtp:smtp \
  --env SENTRY_SECRET_KEY=${SENTRY_SECRET_KEY} \
  sentry-onpremise \
  run web

This starts uWSGI (the app server) and its workers.
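
To confirm the web container came up cleanly, the container logs are the quickest check (using the container name chosen above):

deploy@vultr$ sudo docker logs sentry-web-01
# look for uWSGI starting and its workers spawning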

Running Sentry Background Workers

For Sentry to effectively process errors (as jobs) from different applications, we need background workers to consume the queued jobs. For this, Sentry uses Celery.

deploy@vultr$ sudo docker run \
  --detach \
  --name sentry-worker-01 \
  --link sentry-redis:redis \
  --link sentry-postgres:postgres \
  --link sentry-smtp:smtp \
  --env SENTRY_SECRET_KEY=${SENTRY_SECRET_KEY} \
  sentry-onpremise \
  run worker

Running Sentry Cron Process

The documentation specifies starting the cron process just once. This is to avoid processing the same task twice.

deploy@vultr$ sudo docker run \
  --detach \
  --name sentry-cron \
  --link sentry-redis:redis \
  --link sentry-postgres:postgres \
  --link sentry-smtp:smtp \
  --env SENTRY_SECRET_KEY=${SENTRY_SECRET_KEY} \
  sentry-onpremise \
  run cron

Once all this is done, we can open the login page at http://xxx.xxx.xxx.xxx:9000.
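
A quick way to confirm the web service is answering before opening a browser (run on the server itself, so localhost works):

deploy@vultr$ curl -I http://localhost:9000
# an HTTP status line in the response (e.g. a 200 or a redirect to the login page) means the service is up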

Helper Docker Commands:

  1. To check the list of running docker containers (the default, without any flags):

deploy@vultr$ sudo docker ps

  2. To check the list of all docker containers (-a lists ALL, regardless of whether containers are running or exited):

deploy@vultr$ sudo docker ps -a

  3. The above commands return the container IDs of the containers that ran. To check the logs of what happened inside a container, use:

deploy@vultr$ sudo docker ps -a
# copy the container-id
deploy@vultr$ sudo docker logs <container-id>

  4. We might get errors if a command failed the first time and, on re-running it with some modifications, Docker complains that the name is already in use by <container-id>. In that case we need to remove such containers. To remove containers with status=exited:

# list all the containers which have exited
deploy@vultr$ docker ps -a -f status=exited
# remove them using docker rm (rm = remove)
deploy@vultr$ docker rm $(docker ps -a -f status=exited -q)

