Today, we're going to configure application monitoring for our produciton app using Scout. In the last video, we configured AWS to ship our logs to LogDNA, but LogDNA is a log management tool, not a performance monitoring and analysis tool. With Scout, we can easily find performance bottlenecks in our app and database layers.
Let's dive in.
Let's get started by visiting Scout and creating a new account. Once we've set up a new account, it's pretty simple to get going. The first page you're taken to after logging in presents a single task:
Choose your language.
For this, we're going to select
Ruby on Rails, which then takes us to this page
app setup scout
Since we’re using environment variables, we’ll use those to configure Scout; we don’t actually need a configuration file. We’ll set two variables:
- SCOUT_KEY - our Scout API key
- SCOUT_DEV_TRACE - adds a small speed badge in our browser in development mode. Clicking the badge reveals a transaction trace, which we can use to confirm Scout is working.
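The agent itself ships as the scout_apm gem, so the Gemfile addition is a one-liner (no version constraint shown here; pin one as you see fit):

```ruby
# Gemfile
gem 'scout_apm'
```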
With scout_apm added to our Gemfile, let’s run the app locally to make sure it works:
➜ produciton.com git:(bprice/setup-scout) ✗ SCOUT_KEY=<scout_key> SCOUT_DEV_TRACE=true RAILS_SERVE_STATIC_FILES=true SECRET_KEY_BASE=<secret_key> DATABASE_URL=<url> RAILS_LOG_TO_STDOUT=true bundle exec rails s
Now, open a browser tab and hit
localhost:3000 to generate some traffic.
Let's switch back over to Scout now and see what's happened.
Well, it doesn't get much easier than that. Our app has already sent data up to Scout's servers, and Scout has recognized quite a few things about it already.
Now, we're ready to commit these changes and push the new image up to AWS.
➜ produciton.com git:(bprice/setup-scout) docker build -t production .
Successfully tagged production:latest
➜ produciton.com git:(bprice/setup-scout) docker tag production:latest 154477107666.dkr.ecr.us-east-1.amazonaws.com/dailydrip/produciton
➜ produciton.com git:(bprice/setup-scout) docker push 154477107666.dkr.ecr.us-east-1.amazonaws.com/dailydrip/produciton
latest: digest: sha256:8f0ace7abf2c3a0277c7491497f3b445518031596ce5e13a86704ee79871172b size: 2840
This time will be a little different from the last time we made a code change and pushed a new build, because now we also need to change the task definition. Before, we only had to start a new task and remove the old running one. For this scenario, though, we'll head to the AWS console and go back to our task definitions. Let's select our task definition, which should take us to the task definition details page.
task definition details
Now, let's make sure we select the latest revision and click
Create new revision. On the next page, we need to scroll down to the
Container Definitions section and click on our
production container name, which should slide out a modal. From here, we need to scroll down and add a new environment variable called SCOUT_KEY and add our key.
task definition edit
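Conceptually, the console edit just appends an entry to the container's environment array inside the task definition. A sketch of that transformation as plain Ruby data (the structure here is heavily simplified; real task definitions carry many more fields):

```ruby
# Simplified task definition as Ruby data. We append SCOUT_KEY to the
# production container's environment, which is what the console form does
# to the underlying JSON.
task_def = {
  family: "Webapp",
  container_definitions: [
    {
      name: "production",
      environment: [{ name: "RAILS_LOG_TO_STDOUT", value: "true" }]
    }
  ]
}

container = task_def[:container_definitions].find { |c| c[:name] == "production" }
container[:environment] << { name: "SCOUT_KEY", value: "<scout_key>" }
```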
Once we're finished, we can click Update, then scroll to the bottom and click
Create. We should now be looking at the details page for the new revision of the task definition.
Now, we can navigate over to our
Clusters section from the left menu and choose our
Produciton cluster. From here, we need to check the checkbox for our
produciton service and click
update. This should take us to this page:
Let's select the newest task definition for our service. The newest revision of my task definition is
Webapp:4. We also need to make sure we select
Force new deployment. After that, we can click
Next step until we reach the last step, then confirm the update.
Now, we can navigate back to our
produciton service's details page and wait for the new task to start. Once it's running, we can click on the task, click on its
ENI ID, and get our new IP address so we can generate some traffic.
Once we've hit the service and generated some traffic, let's head back over to Scout and make sure our data is showing up.
It looks like our traffic is showing up.
We can even notice some cool features right out of the gate. One of the first things I noticed is the little rocket signifying a new deploy. If you hover over the icon, it gives you the short git SHA and the time it was deployed.
At the bottom, we can see the node from which we're gathering traffic. It's currently reporting, we're barely using any CPU, and we're using only 133 MB of the 512 MB we've allotted to our container.
I used Apache Benchmark to send over some concurrent traffic, so toward the end of the graph, we can see throughput (the yellow line) spike up to around 4k requests per minute.
Also, you can see that I've highlighted a portion of the graph, and the metrics below it reflect just that time frame. From that, I can gather that the average response time was 8.2 ms with a throughput of 345.4 requests per minute.
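Those windowed numbers are simple aggregates over the highlighted minutes. Illustratively, with made-up per-minute samples (Scout computes these server-side; this just shows the arithmetic):

```ruby
# Made-up per-minute samples for a highlighted two-minute window.
window = [
  { response_ms: 7.9, requests: 320 },
  { response_ms: 8.5, requests: 371 }
]

# Average response time across the window, and mean requests per minute.
avg_response_ms = window.sum { |m| m[:response_ms] } / window.size
rpm             = window.sum { |m| m[:requests] } / window.size.to_f
```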
Let's look at a few other screenshots:
slowest response times
When you highlight a time frame, this modal pops up, showing which controller actions are taking the longest. Our app is extremely basic at the moment, so it makes sense that Devise would be the slowest part of it.
largest memory increase
However, if we click on the dropdown, we can choose
Largest Memory Increase. With a single click, we can now see which actions are consuming the most memory, which makes it easy to pinpoint subpar code.
Scout lets you set up alerts (notifications) and notification groups for all sorts of scenarios, like:
- throughput is greater than 2,000 requests per minute for more than 5 minutes
- errors exceed 5 per minute for more than 5 minutes
- Apdex is less than 0.9 for 15 minutes
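Rules like these boil down to "metric breaches a threshold for a sustained window." A rough sketch of that idea (not Scout's actual implementation; the rule shape and names here are made up):

```ruby
# Rough sketch of a sustained-threshold alert rule, not Scout's code.
# comparison: an operator symbol like :> or :<
# window_minutes: how long the breach must persist before firing
AlertRule = Struct.new(:comparison, :threshold, :window_minutes) do
  # samples: one metric reading per minute, oldest first
  def firing?(samples)
    return false if samples.size < window_minutes
    samples.last(window_minutes).all? { |v| v.public_send(comparison, threshold) }
  end
end

# "throughput > 2000 rpm for more than 5 minutes"
throughput_alert = AlertRule.new(:>, 2000, 5)
throughput_alert.firing?([1800, 2100, 2200, 2300, 2500, 2600])  # fires: last 5 readings all above 2000
```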
Scout also offers background job monitoring and database monitoring, features you'd definitely want when running a production service. Our produciton app isn't doing much yet, so we're only scratching the surface of how powerful and useful Scout can be for performance analysis.
In this video, we set up Scout for application monitoring and learned how to create a new revision of a task definition. We also took a quick dive into Scout and saw how well it could work for uncovering bottlenecks in our production service.