[001.3] Log aggregation with LogDNA

Shipping logs to a centralized service


Today, we'll be walking through configuring logging for our produciton app. We'll go over changing the format of our logs and learn how to ship them from CloudWatch to an external log aggregation service, LogDNA in our case.


In our last video, we went through setting up our container using ECS with Fargate. However, we didn't spend much time going over how we configured our logging. So, let's take a moment to talk about our current situation.

One of the first steps we took was configuring our task-definition. Part of that process required configuring several container settings and adding a few environment variables. One of those environment variables that we set is RAILS_LOG_TO_STDOUT.

If you haven't seen this environment variable before, that's because it was introduced in Rails 5. It switches a bit of code in config/environments/production.rb to allow the Rails logger to send logs to STDOUT instead of the usual log/production.log file.

Here's the actual implementation:

# config/environments/production.rb

if ENV['RAILS_LOG_TO_STDOUT'].present?
  logger           = ActiveSupport::Logger.new(STDOUT)
  logger.formatter = config.log_formatter
  config.logger    = ActiveSupport::TaggedLogging.new(logger)
end

This is useful for us because the awslogs log driver configured in our task definition sends the container's STDOUT to CloudWatch, which means we get some basic log aggregation for free.
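For reference, that wiring lives in the logConfiguration block of the container definition. A sketch of what ours looks like, assuming the /ecs/Web log group from the last video and the us-east-1 region (your names may differ):

"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/Web",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "ecs"
  }
}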

Getting Started

For our logging, we are going to be specifically addressing two things:

  • Formatting the logs from Rails
  • Shipping the logs from CloudWatch to LogDNA

Configuring our log format

When it comes to formatting our logs, plain text is not the best format for programmatically parsing out meaningful information. However, plain text is often easier for the eye to parse, especially in a format that developers have grown accustomed to over many years. For that reason, we will leave our development logs as plain text, but we will change our production log formatting to JSON.

To change our formatting, we are going to use the Lograge library. Implementing Lograge is straightforward. Let's start by adding the gems we'll need:

gem 'lograge'
gem 'logstash-event'

After adding the gems, run bundle install. Then we'll update config/environments/production.rb to use the new log format, since we only want our production logs to be formatted differently. We're going to add this block of code at the bottom of the file:

config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
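Optionally, Lograge can also merge extra fields into each JSON line via its custom_options hook. Nothing later in this walkthrough depends on it; this is just a sketch of the kind of thing you can add:

# config/environments/production.rb (optional)
# Merge extra fields into every request's JSON log line
config.lograge.custom_options = lambda do |event|
  {
    time: event.time,
    params: event.payload[:params].except('controller', 'action')
  }
end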

Now, if we run our app in production mode, we should see the difference in the formatted output. Since we already have the ability to connect to our RDS instance locally, we can test locally before committing our code and pushing up a new image.

➜  produciton.com git:(bprice/setup-docker) RAILS_SERVE_STATIC_FILES=true SECRET_KEY_BASE=<key> DATABASE_URL=<url> RAILS_LOG_TO_STDOUT=true RAILS_ENV=production bundle exec rails s
=> Booting Puma
=> Rails 5.1.4 application starting in production
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.11.0 (ruby 2.4.1-p111), codename: Love Song
* Min threads: 5, max threads: 5
* Environment: production
* Listening on tcp://
Use Ctrl-C to stop
I, [2017-12-19T23:11:07.386456 #50448]  INFO -- : [e5f57176-fae6-4ddb-8a85-a470adb29bea] {"method":"GET","path":"/users/sign_in","format":"html","controller":"Devise::SessionsController","action":"new","status":200,"duration":1000.41,"view":67.35,"db":202.36,"@timestamp":"2017-12-20T05:11:07.385Z","@version":"1","message":"[200] GET /users/sign_in (Devise::SessionsController#new)"}
I, [2017-12-19T23:11:13.581738 #50448]  INFO -- : [248bab73-ccae-4509-8886-e6fcf0acdf06] {"method":"GET","path":"/users/sign_in","format":"html","controller":"Devise::SessionsController","action":"new","status":200,"duration":3.68,"view":2.39,"db":0.0,"@timestamp":"2017-12-20T05:11:13.581Z","@version":"1","message":"[200] GET /users/sign_in (Devise::SessionsController#new)"}
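Those one-line payloads are easier to read pretty-printed. Here's the first request's JSON, reformatted:

{
  "method": "GET",
  "path": "/users/sign_in",
  "format": "html",
  "controller": "Devise::SessionsController",
  "action": "new",
  "status": 200,
  "duration": 1000.41,
  "view": 67.35,
  "db": 202.36,
  "@timestamp": "2017-12-20T05:11:07.385Z",
  "@version": "1",
  "message": "[200] GET /users/sign_in (Devise::SessionsController#new)"
}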

It looks like Rails is formatting our logs correctly now, so we can commit these changes and push up a new image.

➜  produciton.com git:(bprice/setup-logging) docker build -t production .
(concatenated output...)
Successfully tagged production:latest
➜  produciton.com git:(bprice/setup-logging) docker tag production:latest 154477107666.dkr.ecr.us-east-1.amazonaws.com/dailydrip/produciton
➜  produciton.com git:(bprice/setup-logging) docker push 154477107666.dkr.ecr.us-east-1.amazonaws.com/dailydrip/produciton
(concatenated output...)
latest: digest: sha256:5e30632f84afb024094a76008079b514b744df87443e9487b5d2382f8cceb81f size: 2840

Now, we can go to the AWS console and restart our app. To do this, we will need to navigate to Elastic Container Service and go to the Clusters section. Once there, select the produciton service and click the Update button.

This takes us to the Update Service page, which has 4 steps. The only change we need to make here is to select the Force new deployment option; then we click Next step through the remaining pages until the last one, where we click Update Service. This takes us to a page with the status of the change. We can click the View Service button to get back to our service overview.

Note: There are other ways that we can restart our service, both from the AWS console and the command line. I've just chosen this method.
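For example, the same restart can be triggered from the AWS CLI; the cluster and service names below are placeholders:

# Force the service to roll its tasks, pulling the latest image
aws ecs update-service --cluster <cluster-name> --service <service-name> --force-new-deployment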

Once we are back on our Service overview page, we should see a second task starting up. What is happening here is that a new task is being started with the latest image we pushed; once it is running, the old task will be stopped.

service overview

Once the new task is up and running, we can find its IP by selecting the task, clicking on the ENI, and getting the address from the IPv4 Public IP field. Now, if we navigate to that IP address, we should see that our app is still running. Also, we can click on the Logs tab in our Task details page and see that our logs are now being delivered as JSON.

task logs
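If you'd rather skip the console clicking, here's a rough AWS CLI sketch of the same IP lookup; the cluster name is a placeholder:

# Grab the first task's ENI, then look up that ENI's public IPv4 address
TASK_ARN=$(aws ecs list-tasks --cluster <cluster-name> --query 'taskArns[0]' --output text)
ENI_ID=$(aws ecs describe-tasks --cluster <cluster-name> --tasks "$TASK_ARN" \
  --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" --output text)
aws ec2 describe-network-interfaces --network-interface-ids "$ENI_ID" \
  --query 'NetworkInterfaces[0].Association.PublicIp' --output text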

Shipping our logs to LogDNA

The last step of this process is configuring AWS to ship our logs over to LogDNA. This requires setting up a Lambda function on AWS and configuring our CloudWatch Log Group to stream logs to that function.

The first step is to head over to LogDNA and set up an account. Once the account is set up, you'll need to log in, click on the tab with the cog icon in the top left, and choose Organization in the left navigation menu.

Once you've done that, you should be on the Manage Organization Profile page. Copy the ingestion key; we'll need it in the next step.

logdna manage org

Now, let's move back over to the AWS Console, open the Services dropdown, and choose Lambda. Click the Create a function button. On this page, we are going to skip down to the Author from scratch section and enter the following info:

  • Name: LogDNA
  • Runtime: Python 2.7
  • Role: Create a new role from template(s) (unless you already have an IAM role set up that you can use)
  • Role name: basic_lambda_edge_role
  • Policy Templates: Basic Edge Lambda permissions

The page should now look like this:

create lambda function

From here, if we click the Create function button at the bottom, it should create the Lambda function and send us to its configuration page. Next, we need to scroll down to the Function code section and set it to the function code that LogDNA provides for its CloudWatch integration.

We also need to create a LOGDNA_KEY environment variable with the ingestion key from LogDNA. Once all of that is completed, our configuration should look similar to this:

lambda config
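To demystify what the function is doing, here is a loose Python 2.7 sketch of a CloudWatch-to-LogDNA shipper. This is not LogDNA's actual code, and the ingestion URL and payload shape are assumptions based on LogDNA's ingestion API; use the function code LogDNA provides:

# Hypothetical sketch only -- use LogDNA's published function in practice
import base64
import gzip
import json
import os
import StringIO
import urllib2

def lambda_handler(event, context):
    # CloudWatch delivers each batch as base64-encoded, gzipped JSON
    payload = base64.b64decode(event['awslogs']['data'])
    data = json.loads(gzip.GzipFile(fileobj=StringIO.StringIO(payload)).read())

    # Reshape the CloudWatch log events into LogDNA's line format
    lines = [{'line': e['message'],
              'timestamp': e['timestamp'],
              'app': data['logGroup']}
             for e in data['logEvents']]

    # POST the batch to LogDNA, authenticating with our LOGDNA_KEY variable
    req = urllib2.Request(
        'https://logs.logdna.com/logs/ingest?hostname=ecs&apikey=' + os.environ['LOGDNA_KEY'],
        json.dumps({'lines': lines}),
        {'Content-Type': 'application/json'})
    urllib2.urlopen(req)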

At this point, we can click Save in the top right and navigate to the CloudWatch console.

Once we are in the CloudWatch console, we need to click on Logs in the left navigation menu and find the Log Group for our ECS service. If you followed along with the last video, it will probably be called /ecs/Web. Once you have found your Log Group, select the button on the left of the row and choose Stream to AWS Lambda from the Actions dropdown.

cloudwatch log groups

Once you click Stream to AWS Lambda, it will redirect you to another page to choose the Lambda function. Choose LogDNA and click Next. Now it should ask you to choose a log format. Select JSON and click Next. On the last page, click Start Streaming.

At this point, we should be redirected back to the CloudWatch Log Groups page, where we can see Lambda (LogDNA) under the Subscriptions column for our service's log group.
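The same subscription can also be created from the AWS CLI. A sketch, assuming the us-east-1 region, with the account ID left as a placeholder:

# Let CloudWatch Logs invoke the function, then subscribe the log group to it
aws lambda add-permission --function-name LogDNA --statement-id cloudwatch-logs \
  --action lambda:InvokeFunction --principal logs.us-east-1.amazonaws.com \
  --source-arn "arn:aws:logs:us-east-1:<account-id>:log-group:/ecs/Web:*"
aws logs put-subscription-filter --log-group-name /ecs/Web \
  --filter-name LogDNA --filter-pattern '' \
  --destination-arn "arn:aws:lambda:us-east-1:<account-id>:function:LogDNA"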

Finally, we can head back over to our produciton app and refresh the page a few times to generate some traffic. Once we've done that, we can head back over to LogDNA, click on Everything in the left navigation pane, and see our logs being aggregated.



Today we dove into configuring our Rails application to format logs with a JSON formatter at the environment level. We also learned the basics of creating a Lambda function and configuring CloudWatch to stream our logs to LogDNA using that Lambda.

Take some time to go through LogDNA and look through all of its features. Some of the features they offer include natural language searching, the ability to set up views for filters you might frequently use, and a decent assortment of filter options.